GANs (Generative Adversarial Networks) are systems in which two neural networks are pitted against each other: a generator, which synthesizes images or other data, and a discriminator, which scores how plausible the results are. The two networks train in a feedback loop: the generator learns to fool the discriminator, and the discriminator learns to catch it, so the output quality improves incrementally.
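To make the adversarial loop concrete, here is a minimal numpy sketch (a toy illustration, not the actual VQGAN architecture): a one-dimensional generator and a logistic discriminator are trained against each other with hand-derived gradients. All values and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: "real" data is drawn from N(3, 0.5).
# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to samples.
# Discriminator D(x) = sigmoid(w*x + c) scores how plausible x is.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.03, 64

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for step in range(2000):
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    dw = np.mean(-(1 - p_real) * real + p_fake * fake)
    dc = np.mean(-(1 - p_real) + p_fake)
    w, c = w - lr * dw, c - lr * dc

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    p_fake = sigmoid(w * fake + c)
    da = np.mean(-(1 - p_fake) * w * z)
    db = np.mean(-(1 - p_fake) * w)
    a, b = a - lr * da, b - lr * db

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(samples)), 2))  # drifts toward the real mean of 3
```

The same push-and-pull, scaled up to deep convolutional networks and pixel grids instead of single numbers, is what produces the images below.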
CLIP (Contrastive Language-Image Pre-training) is a companion third neural network that scores how well an image matches a natural-language description. That description (the text prompt) is what is initially fed into the VQGAN to steer the image it generates.
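Here is a toy numpy sketch of that guidance loop; it is not OpenAI's CLIP. Random unit vectors stand in for CLIP's text and image embeddings, and gradient ascent on their cosine similarity plays the role of VQGAN+CLIP nudging the generated image toward the prompt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for CLIP embeddings: in real VQGAN+CLIP, "text" would be the
# CLIP embedding of the prompt and "image" the CLIP embedding of the
# rendered VQGAN output; both live in the same vector space.
dim = 64
text = rng.normal(size=dim)
text /= np.linalg.norm(text)

image = rng.normal(size=dim)
image /= np.linalg.norm(image)

lr = 0.1
for step in range(300):
    # Gradient of cosine similarity for a unit-norm image vector:
    # move toward the text direction, minus the radial component.
    cos = float(text @ image)
    grad = text - cos * image
    image += lr * grad
    image /= np.linalg.norm(image)   # keep unit norm

print(round(float(text @ image), 3))  # similarity climbs toward 1.0
```

In the real pipeline the gradient flows back through VQGAN's decoder into its latent codes, so each step redraws the picture to look a little more like the words.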
VQGAN: Vector Quantized Generative Adversarial Network
Released by Katherine Crowson @RiversHaveWings and Ryan Murdock @advadnoun in Apr/2021.
CLIP: Contrastive Language-Image Pre-training
Released by OpenAI in Jan/2021.
Image size: 600×600
Please do not over-use this platform, as it is a small instance with a queue already!
AI art generated by GLIDE (Dec/2021)
impossible labyrinth at night
joyful vivid color
lighting store candelabra
studio ghibli landscape
⬛ Dr Alan D. Thompson is an AI expert and consultant. With Leta (an AI powered by GPT-3), Alan co-presented a seminar called ‘The new irrelevance of intelligence’ at the World Gifted Conference in August 2021. His applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021. He has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET. He is open to consulting and advisory on major AI projects with intergovernmental organisations and enterprise.