759: Full Encoder-Decoder Transformers Fully Explained, with Kirill Eremenko

2024/2/20

Super Data Science: ML & AI Podcast with Jon Krohn

Show Notes

Encoders, cross-attention, and masking for LLMs: SuperDataScience founder Kirill Eremenko returns to the SuperDataScience Podcast, where he speaks with Jon Krohn about transformer architectures and why they are a new frontier for generative AI. If you're interested in applying LLMs to your business portfolio, you'll want to pay close attention to this episode!

This episode is brought to you by Ready Tensor, where innovation meets reproducibility, by Oracle NetSuite business software, and by Intel and HPE Ezmeral Software Solutions. Interested in sponsoring a SuperDataScience Podcast episode? Visit passionfroot.me/superdatascience for sponsorship information.

In this episode you will learn:
• How decoder-only transformers work [15:51]
• How cross-attention works in transformers [41:05]
• How encoders and decoders work together (an example) [52:46]
• How encoder-only architectures excel at understanding natural language [1:20:34]
• The importance of masking during self-attention [1:27:08]

Additional materials: www.superdatascience.com/759
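As a concrete companion to the topics listed above, here is a minimal NumPy sketch (ours, not code from the episode) of scaled dot-product attention. It illustrates two of the mechanisms discussed: how a causal mask restricts decoder self-attention to earlier positions, and how cross-attention differs only in where the queries, keys, and values come from. All function names and dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (L_q, L_k)
    if mask is not None:
        # Masked-out positions get -inf, so softmax gives them zero weight.
        scores = np.where(mask, scores, -np.inf)
    return softmax(scores) @ V

# Toy dimensions (illustrative only).
L_src, L_tgt, d = 5, 4, 8
rng = np.random.default_rng(0)
enc_out = rng.normal(size=(L_src, d))  # encoder output (source sequence)
dec_in  = rng.normal(size=(L_tgt, d))  # decoder hidden states (target sequence)

# Decoder self-attention uses a causal (lower-triangular) mask so each
# position can attend only to itself and earlier positions.
causal = np.tril(np.ones((L_tgt, L_tgt), dtype=bool))
self_attn = attention(dec_in, dec_in, dec_in, mask=causal)

# Cross-attention: queries come from the decoder, while keys and values
# come from the encoder output; no causal mask is needed here.
cross_attn = attention(dec_in, enc_out, enc_out)

print(self_attn.shape)   # (4, 8): one output vector per target position
print(cross_attn.shape)  # (4, 8)
```

The only difference between the two calls is where the keys and values come from, which is exactly the distinction between self-attention and cross-attention that the episode unpacks.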