
670: LLaMA: GPT-3 performance, 10x smaller

2023/4/14

Super Data Science: ML & AI Podcast with Jon Krohn


Shownotes

How does Meta AI's large language model, LLaMA, compare to the rest? Guided by the Chinchilla scaling laws, LLaMA is designed to be smaller yet more performant. How does it achieve this feat? By training a smaller model on many more tokens of data. Discover how LLaMA compares to its competition, including GPT-3, in this week's episode.

Additional materials: www.superdatascience.com/670

Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
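As a rough illustration of the smaller-but-trained-longer argument (not from the episode itself), here is a minimal sketch using the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The constants are the paper's published fit; the parameter and token counts are the commonly cited figures for GPT-3 and LLaMA-13B, used here only as an assumption for illustration.

```python
# Sketch: Chinchilla parametric loss fit (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# N = parameters, D = training tokens. Constants are the paper's
# published fit; model sizes and token counts are commonly cited
# figures, not values taken from this episode.

E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params trained on n_tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# GPT-3: ~175B parameters trained on ~300B tokens.
gpt3 = chinchilla_loss(175e9, 300e9)

# LLaMA-13B: ~13x smaller, but trained on ~1T tokens.
llama_13b = chinchilla_loss(13e9, 1.0e12)

print(f"GPT-3 (175B params, 300B tokens):  predicted loss ~ {gpt3:.3f}")
print(f"LLaMA-13B (13B params, 1T tokens): predicted loss ~ {llama_13b:.3f}")
# The fit predicts roughly comparable loss (~2.00 vs ~2.02): the smaller
# model compensates by consuming far more training data.
```

Under this fit, a model roughly 13x smaller than GPT-3 reaches a comparable predicted loss simply by training on more tokens, and the smaller model is also much cheaper to run at inference time.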