Training Data

Join us as we train our neural nets on the theme of the century: AI. Hosted by Sonya Huang, Pat Grady, and more.

Episodes

Total: 24

Founded in early 2023 after spending years at Stripe and OpenAI, Gabriel Hubert and Stanislas Polu s

Clay is leveraging AI to help go-to-market teams unleash creativity and be more effective in their w

Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdo

Years before co-founding Glean, Arvind was an early Google employee who helped design the search alg

In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they h

NotebookLM from Google Labs has become the breakout viral AI product of the year. The feature that c

All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinati

Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading

Adding code to LLM training data is a known method of improving a model’s reasoning skills. But woul

AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD a

There’s a new archetype in Silicon Valley, the AI researcher turned founder. Instead of tinkering in

On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today

Customer service is hands down the first killer app of generative AI for businesses. The reasons are

After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforc

In the first wave of the generative AI revolution, startups and enterprises built on top of the best

GitHub invented collaborative coding and in the process changed how open source projects, startups

As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, whi

In February, Sebastian Siemiatkowski boldly announced that Klarna’s new OpenAI-powered assistant han

LLMs are democratizing digital intelligence, but we’re all waiting for AI agents to take this to the

The current LLM era is the result of scaling the size of models in successive waves (and the compute