Autoencoders, transformers, latent space: Learn the elements of generative AI and hear what data scientist David Foster has to say about the potential for generative AI in music, as well as the role that world models play in blending generative AI with reinforcement learning.

This episode is brought to you by Posit, the open-source data science company, by Anaconda, the world's most popular Python distribution, and by WithFeeling.ai, the company bringing humanity into AI. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.

In this episode you will learn:
• Generative modeling vs discriminative modeling [04:21]
• Generative AI for music [13:12]
• On the threats of AI [23:15]
• Autoencoders explained [38:36]
• Noise in generative AI [48:11]
• What CLIP models are (Contrastive Language-Image Pre-training) [54:07]
• What world models are [1:00:40]
• What a transformer is [1:11:14]
• How to use transformers for music generation [1:19:50]

Additional materials: www.superdatascience.com/687