To process your data you need to know what shape it has, which is why schemas are important. When you are processing that data in multiple systems, it can be difficult to ensure that they all have an accurate representation of that schema, which is why Confluent has built a schema registry that plugs into Kafka. In this episode Ewen Cheslack-Postava explains what the schema registry is, how it can be used, and how they built it. He also discusses how it can be extended for other deployment targets and use cases, and additional features that are planned for future releases.
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure.
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page, which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Ewen Cheslack-Postava about the Confluent Schema Registry
Introduction
How did you get involved in the area of data engineering?
What is the schema registry and what was the motivating factor for building it?
If you are using Avro, what benefits does the schema registry provide over and above the capabilities of Avro’s built-in schemas?
How did you settle on Avro as the format to support and what would be involved in expanding that support to other serialization options?
Conversely, what would be involved in using a storage backend other than Kafka?
What are some of the alternative technologies available for people who aren’t using Kafka in their infrastructure?
What are some of the biggest challenges that you faced while designing and building the schema registry?
What is the tipping point in terms of system scale or complexity when it makes sense to invest in a shared schema registry and what are the alternatives for smaller organizations?
What are some of the features or enhancements that you have in mind for future work?
Avro
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA