Data pipelines are the core of every data product, ML model, and business intelligence dashboard. If you're not careful, you will end up spending all of your time on maintenance and fire-fighting. The folks at Rivery distilled seven principles of modern data pipelines that will help you stay out of trouble and stay productive with your data. In this episode Ariel Pohoryles explains what they are and how they work together to increase your chances of success.
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs into your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold
Your host is Tobias Macey and today I'm interviewing Ariel Pohoryles about the seven principles of modern data pipelines
Introduction
How did you get involved in the area of data management?
Can you start by defining what you mean by a "modern" data pipeline?
At Rivery you published a white paper identifying seven principles of modern data pipelines (a brief code sketch illustrating two of them follows the list):
Zero infrastructure management
ELT-first mindset
Speaks SQL and Python
Dynamic multi-storage layers
Reverse ETL & operational analytics
Full transparency
Faster time to value
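Two of these principles, the ELT-first mindset and speaking both SQL and Python, are easy to picture in code. The sketch below loads raw records into the warehouse as-is and then transforms them with SQL where the data lives. It is a minimal illustration under assumed names (the API URL, connection string, and table names are all hypothetical), not Rivery's implementation.

```python
# A minimal, illustrative sketch of an ELT-first pipeline that "speaks
# SQL and Python". Every concrete name here (the API URL, connection
# string, and table/column names) is an assumption for the example.
import json
import urllib.request

import psycopg2  # assumes a Postgres-compatible warehouse

# Extract + Load: pull raw records from a hypothetical API and land
# them in the warehouse untouched -- no in-flight transformation.
with urllib.request.urlopen("https://api.example.com/orders") as resp:
    records = json.load(resp)

conn = psycopg2.connect("dbname=warehouse user=loader")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS raw_orders (payload jsonb)")
    cur.executemany(
        "INSERT INTO raw_orders (payload) VALUES (%s)",
        [(json.dumps(r),) for r in records],
    )
    # Transform: model the data with SQL inside the warehouse, where
    # the compute lives, rather than in the pipeline itself.
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS orders_by_day AS
        SELECT (payload->>'ordered_at')::date     AS order_date,
               SUM((payload->>'amount')::numeric) AS revenue
        FROM raw_orders
        GROUP BY 1
        """
    )
conn.close()
```

The design choice worth noticing is that the warehouse does the transformation work, which keeps the pipeline itself thin and pairs naturally with the zero-infrastructure-management principle above.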
What are the applications of data that you focused on while identifying these principles?
How does the application of these principles influence the ability of organizations and their data teams to encourage and keep pace with the use of data in the business?
What are the technical components of a pipeline infrastructure that are necessary to support a "modern" workflow?
How do the technologies involved impact the organizational involvement with how data is applied throughout the business?
When using managed services, what are the ways that the pricing model acts to encourage/discourage experimentation/exploration with data?
What are the most interesting, innovative, or unexpected ways that you have seen these seven principles implemented/applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working with customers to adapt to these principles?
What are the cases where some/all of these principles are undesirable/impractical to implement?
What are the opportunities for further advancement/sophistication in the ways that teams work with and gain value from data?
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers
ELT
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA