
Data Engineering Weekly with Joe Crobak - Episode 27

2018/4/15

Data Engineering Podcast



Summary

The rate of change in the data engineering industry is alternately exciting and exhausting. Joe Crobak found his way into the work of data management by accident, as so many of us do. After becoming engrossed in researching the details of distributed systems and big data management for his work, he began sharing his findings with friends. This led to his creation of the Hadoop Weekly newsletter, which he recently rebranded as the Data Engineering Weekly newsletter. In this episode he discusses his experiences working as a data engineer in industry and at the USDS, his motivations and methods for creating a newsletter, and the insights that he has gleaned from it.

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management

  • When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.

  • Your host is Tobias Macey and today I’m interviewing Joe Crobak about his work maintaining the Data Engineering Weekly newsletter, and the challenges of keeping up with the data engineering industry.

Interview

  • Introduction

  • How did you get involved in the area of data management?

  • What are some of the projects that you have been involved in that were most personally fulfilling?

  • As an engineer at the USDS working on the healthcare.gov and Medicare systems, what were some of the approaches that you used to manage sensitive data?

  • Healthcare.gov has a storied history, how did the systems for processing and managing the data get architected to handle the amount of load that it was subjected to?

  • What was your motivation for starting a newsletter about the Hadoop space?

  • Can you speak to your reasoning for the recent rebranding of the newsletter?

  • How much of the content that you surface in your newsletter is found during your day-to-day work, versus explicitly searching for it?

  • After over 5 years of following the trends in data analytics and data infrastructure what are some of the most interesting or surprising developments?

  • What have you found to be the fundamental skills or areas of experience that have maintained relevance as new technologies in data engineering have emerged?

  • What is your workflow for finding and curating the content that goes into your newsletter?

  • What is your personal algorithm for filtering which articles, tools, or commentary gets added to the final newsletter?

  • How has your experience managing the newsletter influenced your areas of focus in your work, and vice versa?

  • What are your plans going forward?

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast