Summary
All data systems are subject to the "garbage in, garbage out" problem. For machine learning applications, bad data can lead to unreliable models and unpredictable results. Anomalo is a product designed to alert on bad data by applying machine learning models to various storage and processing systems. In this episode Jeremy Stanley discusses the challenges involved in building useful and reliable machine learning models from unreliable data and the interesting problems that they are solving in the process.
Announcements
Interview
Introduction
How did you get involved in machine learning?
Can you describe what Anomalo is and the story behind it?
What are some of the ML approaches that you are using to address challenges with data quality/observability?
What are some of the difficulties posed by your application of ML technologies on data sets that you don't control?
How does the scale and quality of data that you are working with influence/constrain the algorithmic approaches that you are using to build and train your models?
How have you implemented the infrastructure and workflows that you are using to support your ML applications?
What are some of the ways that you are addressing data quality challenges in your own platform?
What are the opportunities that you have for dogfooding your product?
What are the most interesting, innovative, or unexpected ways that you have seen Anomalo used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Anomalo?
When is Anomalo the wrong choice?
What do you have planned for the future of Anomalo?
Contact Info
Parting Question
Closing Announcements
Links
Neural Networks For Pattern Recognition by Christopher M. Bishop (affiliate link)
dbt
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0