
Snorkel: Extracting Value From Dark Data with Alex Ratner - Episode 15

2018/1/22

Data Engineering Podcast



Summary

The majority of the conversation around machine learning and big data pertains to well-structured and cleaned data sets. Unfortunately, that represents only a small fraction of the information that is available; the rest of an organization's knowledge is locked away in so-called "dark data" sets. In this episode Alex Ratner explains how the work that he and his fellow researchers are doing on Snorkel can be used to extract value from that data by leveraging labeling functions written by domain experts to generate training sets for machine learning models. He also explains how this approach can help democratize machine learning by making it feasible for organizations whose data sets are smaller than those required by most tooling.
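As a rough illustration of the idea discussed in the episode, the sketch below shows what heuristic labeling functions and a simple vote-based aggregation might look like in plain Python. The task, function names, and label values are hypothetical, and the majority-vote combiner is a deliberate simplification: Snorkel itself learns the accuracies of the labeling functions with a generative model and produces probabilistic labels, rather than taking a simple vote.

```python
from collections import Counter

# Hypothetical label values for an illustrative spam-detection task.
ABSTAIN, NOT_SPAM, SPAM = -1, 0, 1

# Labeling functions: noisy, heuristic rules a domain expert might write.
# Each one votes on a single example or abstains.
def lf_contains_link(text):
    return SPAM if "http" in text.lower() else ABSTAIN

def lf_mentions_prize(text):
    return SPAM if "prize" in text.lower() or "winner" in text.lower() else ABSTAIN

def lf_short_personal_note(text):
    return NOT_SPAM if len(text.split()) < 6 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_mentions_prize, lf_short_personal_note]

def label(text):
    """Combine the noisy votes into a single training label.

    A majority vote is used here for clarity; Snorkel instead models the
    agreements and disagreements among labeling functions to estimate their
    accuracies and emit probabilistic labels.
    """
    votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    examples = [
        "Congratulations, you are the winner of a free prize! http://spam.example",
        "See you at lunch tomorrow",
    ]
    for text in examples:
        print(label(text), "<-", text)
```

The labels produced this way would then serve as a (noisy) training set for a conventional discriminative model, which is the step where Snorkel hands off to standard machine learning tooling.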

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure

  • When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.

  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.

  • You can help support the show by checking out the Patreon page which is linked from the site.

  • To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers

  • Your host is Tobias Macey and today I’m interviewing Alex Ratner about Snorkel and Dark Data

Interview

  • Introduction

  • How did you get involved in the area of data management?

  • Can you start by sharing your definition of dark data and how Snorkel helps to extract value from it?

  • What are some of the most challenging aspects of building labeling functions, and what tools or techniques are available to verify their validity and effectiveness in producing accurate outcomes?

  • Can you provide some examples of how Snorkel can be used to build useful models in production contexts for companies or problem domains where data collection is difficult to do at large scale?

  • For someone who wants to use Snorkel, what are the steps involved in processing the source data, and what tooling or systems are necessary to analyze the outputs and generate usable insights?

  • How is Snorkel architected and how has the design evolved over its lifetime?

  • What are some situations where Snorkel would be poorly suited for use?

  • What are some of the most interesting applications of Snorkel that you are aware of?

  • What are some of the other projects that you and your group are working on that interact with Snorkel?

  • What are some of the features or improvements that you have planned for future releases of Snorkel?

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast