
689: Observing LLMs in Production to Automatically Catch Issues

2023/6/20

Super Data Science: ML & AI Podcast with Jon Krohn


Shownotes

Arize's Amber Roberts and Xander Song join Jon Krohn this week, sharing invaluable insights into ML observability, drift detection, retraining strategies, and the crucial task of ensuring fairness and ethical considerations in AI development.

This episode is brought to you by Posit, the open-source data science company, by AWS Inferentia, and by Anaconda, the world's most popular Python distribution. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.

In this episode you will learn:
• What is ML observability [05:07]
• What is drift [08:18]
• The different kinds of model drift [15:31]
• How frequently production models should be retrained [25:15]
• Arize's open-source product, Phoenix [30:49]
• How ML observability relates to discovering model biases [50:30]
• Arize case studies [57:13]
• What is a developer advocate [1:04:51]

Additional materials: www.superdatascience.com/689