Summary
Machine learning workflows have long been complex and difficult to operationalize. They are often characterized by a period of research, resulting in an artifact that gets passed to another engineer or team to prepare for running in production. The MLOps category of tools has tried to build a new set of utilities to reduce that friction, but has instead introduced a new barrier at the team and organizational level. Donny Greenberg took the lessons that he learned on the PyTorch team at Meta and created Runhouse. In this episode he explains how, by reducing the number of opinions in the framework, he has also reduced the complexity of moving from development to production for ML systems.
Announcements
Interview
Introduction
How did you get involved in machine learning?
What are the core elements of infrastructure for ML and AI?
How has that changed over the past ~5 years?
For the past few years the MLOps and data engineering stacks have been built and managed separately. How do the current generation of tools and product requirements influence the present and future approach to those domains?
There are numerous projects that aim to bridge the complexity gap in running Python and ML code from your laptop up to distributed compute on clouds (e.g. Ray, Metaflow, Dask, Modin). How do you view the decision process for teams trying to understand which tool(s) to use for managing their ML/AI developer experience?
Can you describe what Runhouse is and the story behind it?
What are the core problems that you are working to solve?
What are the main personas that you are focusing on? (e.g. data scientists, DevOps, data engineers, etc.)
How does Runhouse factor into collaboration across skill sets and teams?
Can you describe how Runhouse is implemented?
How has the focus on developer experience informed the way that you think about the features and interfaces that you include in Runhouse?
How do you think about the role of Runhouse in the integration with the AI/ML and data ecosystem?
What does the workflow look like for someone building with Runhouse?
What is involved in managing the coordination of compute and data locality to reduce networking costs and latencies?
What are the most interesting, innovative, or unexpected ways that you have seen Runhouse used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Runhouse?
When is Runhouse the wrong choice?
What do you have planned for the future of Runhouse?
What is your vision for the future of infrastructure and developer experience in ML/AI?
Contact Info
Parting Question
Closing Announcements
Links
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0