A deeper look at the concept illustrated in Slide 18, including examples, applications, and how it works technically.
Slide 18 introduces how generative models learn representations from data and use them to produce new content that resembles the original training domain. The slide highlights the shift from traditional rule‑based or discriminative AI to model architectures capable of generating text, images, and other media through learned probability distributions.
Models learn patterns, structures, and features from large datasets without explicit rules.
Generators create new data by sampling from learned probability distributions.
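Sampling from a learned distribution can be sketched in a few lines. This is a minimal illustration, not a real model: the candidate tokens and their logits are invented, standing in for scores a trained generator would produce.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, temperature=1.0, rng=random):
    """Draw one token at random, weighted by the learned probabilities."""
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(tokens, probs):
        cumulative += p
        if r < cumulative:
            return token
    return tokens[-1]

# Hypothetical logits a model might assign to candidate next words.
tokens = ["cat", "dog", "car"]
logits = [2.0, 1.0, 0.1]
print(sample(tokens, logits, temperature=0.8))
```

Lowering the temperature concentrates probability on the highest-scoring token (more predictable output); raising it flattens the distribution (more varied output).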
Models are trained on missing-data prediction tasks, such as next-token or masked-token prediction, which force them to build internal world models.
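The idea of learning by predicting missing data can be shown at toy scale. The sketch below is a crude stand-in for a real model: it counts which character follows each character in a tiny corpus, then "fills in" a missing next character; the corpus string is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which character follows each character: a crude 'world model'."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def predict_next(model, char):
    """Fill in the missing next character with the most likely continuation."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

corpus = "the theory of the thing"  # toy training data
model = train_bigram(corpus)
print(predict_next(model, "t"))  # -> 'h', the most frequent character after 't'
```

A large language model does the same thing at vastly greater scale: predicting held-out tokens forces it to internalize the statistics, and eventually the structure, of its training domain.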
Training starts from large datasets of images, text, audio, or code.
Transformer architectures encode the patterns and relationships within that data.
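The core mechanism by which transformers relate elements of a sequence is attention. The following is a minimal pure-Python sketch of scaled dot-product attention; the 2-dimensional "embeddings" are invented numbers, not learned values.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    then returns a weighted mix of the values."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0]))]
        outputs.append(mixed)
    return outputs

# Three toy token embeddings (invented, 2-dimensional).
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(emb, emb, emb)  # self-attention over the sequence
print(out)
```

Each output vector is a convex combination of the value vectors, so every token's representation is updated with context from the whole sequence; stacking many such layers (plus learned projections) is what lets transformers capture long-range relationships.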
From those encodings, models build abstract internal representation (latent) spaces.
Sampling from the latent space produces new content.
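The generation step above can be sketched as "sample a latent vector, then decode it." The decoder below is a hypothetical hand-written stand-in for a trained network, and its output strings are invented; in a real VAE or diffusion model, the decoder would be learned from data.

```python
import random

def decoder(z):
    """Hypothetical 'trained' decoder: maps a 2-D latent vector to a
    short description. A real model would use a neural network here."""
    shape = "round" if z[0] > 0 else "angular"
    tone = "bright" if z[1] > 0 else "dark"
    return f"a {tone}, {shape} pattern"

def generate(n, rng=random):
    """Generate new content by sampling latent vectors and decoding them."""
    samples = []
    for _ in range(n):
        z = [rng.gauss(0, 1), rng.gauss(0, 1)]  # sample from the prior
        samples.append(decoder(z))
    return samples

for s in generate(3):
    print(s)
```

Because the latent space is continuous, nearby latent vectors decode to similar outputs, which is why sampling it yields novel content that still resembles the training distribution.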
The slide shows how generative models rely on learned internal representations to create new examples that fit a data distribution. This reliance on representations, rather than explicit rules, is what allows models to generalize and generate coherent, context-aware outputs.
Key architectures include transformers, variational autoencoders (VAEs), diffusion models, and generative adversarial networks (GANs).