One of the recurring themes in the big data & data streaming worlds at the moment is developer experience. It seems like every major tool is trying to answer this question: how do we make large-scale data processing feel trivial?
In some places the answer is any library you like, as long as it’s Python. Elsewhere, a mixture of Java and SQL shows promise. But as this week’s guest, Luca Pette, would say, the Unix design metaphor still has plenty to give.
So in this episode of Developer Voices we look at TypeStream, his Kotlin project that provides a shell-like interface to data pipelines and is gradually expanding to make integration pipelines as simple as cat /dev/kafka | tee /dev/postgres.
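To give a flavour of the idea, a pipeline in that shell-like spirit might look something like the sketch below. The topic path and filter here are hypothetical, chosen to illustrate the Unix metaphor rather than to show documented TypeStream syntax:

    cat /dev/kafka/local/topics/orders | grep "shipped" | tee /dev/postgres

In other words: read a Kafka topic as if it were a file, filter it, and write the result to Postgres, the same way you would compose standard Unix tools.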
--
Luca on Twitter: https://twitter.com/lucapette
Luca on LinkedIn: https://www.linkedin.com/in/lucapette/
Kris on Twitter: https://twitter.com/krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
TypeStream homepage: https://www.typestream.io/
TypeStream installation guide: https://docs.typestream.io/tutorial/installation
Crafting interpreters: https://craftinginterpreters.com/
…by Bob Nystrom: https://twitter.com/munificentbob
NuShell: https://github.com/nushell/nushell
#podcast #apachekafka #bigdata