Example prompts:
- coldplay chris martin concert comic unreal engine
- love peace joy kindness artstationHQ
- Greta Thunberg climate change campaigner
- leta ai in berlin
- mott macdonald artstationHQ
- algorithm, google, cloud, deep learning, machine learning
In the blue image above, the keywords were suggested by Leta (GPT-3), so it is effectively AI generating AI images!
GANs (Generative Adversarial Networks) are systems in which two neural networks are pitted against each other: a generator, which synthesizes images or data, and a discriminator, which scores how plausible the results are. The two networks train in a feedback loop, each incrementally improving against the other.
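That adversarial feedback loop can be sketched with a toy linear generator and a logistic-regression discriminator. Everything here (shapes, data, learning rate) is an illustrative assumption; real GANs use deep networks and frameworks such as PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a linear map from 2-D noise to a 2-D sample.
G = rng.normal(size=(2, 2))
# Discriminator: logistic regression scoring how "real" a sample looks.
D = rng.normal(size=2)

real = rng.normal(loc=3.0, size=(64, 2))   # stand-in "real" data
noise = rng.normal(size=(64, 2))
fake = noise @ G                           # generator's synthesized samples

# Discriminator step: push its probability up on real samples, down on fakes.
p_real = sigmoid(real @ D)
p_fake = sigmoid(fake @ D)
grad_D = real.T @ (1 - p_real) / 64 - fake.T @ p_fake / 64
D += 0.1 * grad_D

# Generator step: push the (updated) discriminator's score on fakes up.
p_fake = sigmoid(fake @ D)
grad_G = np.outer(noise.T @ ((1 - p_fake) / 64), D)
G += 0.1 * grad_G
```

Repeating these two alternating steps is the whole adversarial game: the discriminator gets better at spotting fakes, and the generator gets better at fooling it.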
CLIP (Contrastive Language-Image Pre-training) is a companion third neural network that matches images to natural-language descriptions. The text prompt is what is initially fed in, and CLIP's match score steers the VQGAN toward images that fit the description.
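The matching step at CLIP's core is a cosine similarity between text and image embeddings in a shared space. A minimal sketch of that scoring, using made-up 3-D vectors in place of the 512-D embeddings a real CLIP model produces:

```python
import numpy as np

def cosine_similarity(text_vec, image_mat):
    # Cosine similarity between one text vector and each row (image) of a matrix.
    text_vec = text_vec / np.linalg.norm(text_vec)
    image_mat = image_mat / np.linalg.norm(image_mat, axis=1, keepdims=True)
    return image_mat @ text_vec

# Hypothetical stand-ins for CLIP embeddings.
text_embedding = np.array([1.0, 0.0, 0.0])
image_embeddings = np.array([
    [0.9, 0.1, 0.0],   # image 0: close to the prompt
    [0.0, 1.0, 0.0],   # image 1: unrelated
    [0.5, 0.5, 0.0],   # image 2: partially related
])

scores = cosine_similarity(text_embedding, image_embeddings)
best = int(np.argmax(scores))   # index of the best-matching image
```

In VQGAN+CLIP, this score is used as the objective: the VQGAN's latent codes are nudged in whatever direction raises the similarity between the generated image and the prompt.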
VQGAN: Vector Quantized Generative Adversarial Network
The VQGAN+CLIP technique was released by Katherine Crowson @RiversHaveWings and Ryan Murdoch @advadnoun in Apr/2021 (VQGAN itself was introduced by Esser, Rombach and Ommer in their "Taming Transformers" paper in late 2020).
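The "vector quantized" part of the name means each encoded image patch is snapped to the nearest entry of a learned codebook, so an image is represented as a grid of discrete code indices. A minimal sketch of that nearest-neighbour lookup, with a made-up codebook (a real VQGAN learns its codebook during training):

```python
import numpy as np

# Hypothetical learned codebook: 4 code vectors of dimension 3.
codebook = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
])

def quantize(z):
    """Snap each row of z to its nearest codebook vector (Euclidean distance)."""
    # Pairwise squared distances between encodings and codebook entries.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

# Two hypothetical patch encodings from the VQGAN encoder.
encodings = np.array([[0.9, 0.1, 0.0],
                      [0.1, 0.9, 0.1]])
quantized, idx = quantize(encodings)
```

The decoder then reconstructs the image from the quantized vectors, which is what makes the latent space discrete and compact.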
CLIP: Contrastive Language-Image Pre-training
Released by OpenAI in Jan/2021.
Image size: 600×600
Please do not overuse this platform, as it is a small instance that already has a queue!