People
Autumn Nash
Avtar Swithin
Jasmine Casas
Justin Garrison
Kurt Mackie
Phillip Carter
Topics
@Jasmine Casas: Sentry's "always be shipping" philosophy is about continuously iterating to improve its products and actively innovating to boost developers' confidence in debugging. They listen to customer needs and pain points, keep iterating on launched products, and also proactively innovate, delivering solutions that benefit developers and help them debug with more confidence.

Deep Dive

Key Insights

Why did Microsoft decide to make .NET cross-platform?

Microsoft aimed to shift .NET from being a Windows-focused development platform to a cross-platform, native solution that could run on Linux, macOS, and Windows, enabling developers to build and deploy applications across different environments seamlessly.

How did Microsoft handle the relationship with the Mono project?

Microsoft had a collaborative relationship with the Mono project, working to unify libraries and eventually acquiring Xamarin, which allowed for a unified runtime that could run on both client devices and backend servers with optimized performance.

What is Honeycomb's approach to observability?

Honeycomb aims to help developers reshape how they introspect their systems by providing tools that allow for a more intuitive debugging flow, focusing on high-level entry points and probabilistic distributions to guide users toward the right queries and insights.

How does Honeycomb use AI to assist developers?

Honeycomb uses AI to help developers by suggesting queries based on natural language input, making it easier for users to explore their data and debug issues without needing to know the exact query structure upfront. This helps users get unstuck and encourages curiosity.

What is the significance of OpenTelemetry for Honeycomb?

OpenTelemetry is crucial for Honeycomb as it provides a standardized way to collect and analyze observability data, which aligns with Honeycomb's goal of helping developers understand their systems. Honeycomb's involvement in OpenTelemetry ensures that their product remains compatible with the evolving standard.

Why did Phillip Carter leave Microsoft for Honeycomb?

Phillip Carter left Microsoft after completing a major milestone with .NET's cross-platform success, seeking new challenges in the developer tools space. Honeycomb's innovative approach to observability and its involvement in OpenTelemetry were key factors in his decision.

How does Honeycomb's natural language querying feature work?

Honeycomb's natural language querying feature uses GPT-3.5 to generate queries based on user input, schema, and example data. It helps users formulate queries for slow requests or other issues by suggesting relevant columns and query shapes, making it easier to explore data without deep knowledge of the system.

What is the future vision for Honeycomb's AI-assisted debugging?

Honeycomb plans to expand its AI-assisted debugging capabilities by offering more suggestions and guidance for users, helping them explore data, and test hypotheses. The goal is to encourage curiosity and make users better at debugging and building systems, aligning with the company's business interests.

Why is Timescale positioning Postgres as the database for AI applications?

Timescale believes Postgres is well-positioned for AI applications due to its extensibility, performance, and scalability. Extensions like PG Vector Scale and PGAI enhance its capabilities for vector search and LLM reasoning, making it a powerful choice for AI developers without needing to manage multiple databases.
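To ground this, here is a minimal sketch of the pgvector side of that story: storing embeddings in Postgres and running a nearest-neighbor query from Python. The table name, column names, connection string, and embeddings are all hypothetical toy values, not Timescale's actual APIs; it assumes the pgvector extension is available plus the psycopg, pgvector, and numpy Python packages.

```python
# Minimal sketch: vector search in Postgres via the pgvector extension.
# Table name, DSN, and 3-dimensional embeddings are invented for illustration.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

conn = psycopg.connect("postgresql://localhost/ai_app", autocommit=True)
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)  # adapt numpy arrays to the Postgres vector type

conn.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(3)   -- tiny dimension, just for the sketch
    )
""")
conn.execute(
    "INSERT INTO docs (body, embedding) VALUES (%s, %s)",
    ("hello world", np.array([0.1, 0.2, 0.3])),
)

# <=> is pgvector's cosine-distance operator; closest documents come first.
rows = conn.execute(
    "SELECT body FROM docs ORDER BY embedding <=> %s LIMIT 5",
    (np.array([0.1, 0.2, 0.25]),),
).fetchall()
print(rows)
```

In a real application the embeddings would come from an embedding model rather than hand-written values; the point is that storage, indexing, and search all stay inside Postgres.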

How does Fly.io differentiate itself from platforms like Heroku and Vercel?

Fly.io differentiates itself by offering a platform with no hard line boundaries, allowing developers to run their apps close to users globally with primitives that enable deep customization. It provides a no-limits approach compared to platforms like Heroku and Vercel, which have more restrictive abstractions.

Chapters
This chapter includes advertisements for Sentry, Fly.io, and Timescale.
  • Sentry offers error tracking and session replay for web and mobile.
  • Fly.io is a platform for deploying applications.
  • Timescale provides a database purpose-built for AI applications.

Transcript


This is Ship It with Justin Garrison and Autumn Nash. If you like this show, you will love The Changelog. It's software news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find it by searching for The Changelog wherever you get your podcasts. Ship It is brought to you by Fly.io. Launch your app in five minutes or less. Learn how at Fly.io. ♪

What's up, friends? I'm here with Jasmine Casas from Sentry. They just wrapped up their launch week and they're always shipping. So, Jasmine, what do you think about Sentry's mantra of always be shipping? So to me, Sentry is a great way

Sentry always shipping is just in the spirit of iteration and constantly improving what we're doing to make developers ship with more confidence. We're always listening to our customers' needs and pain points. We're always iterating on products that we have launched. And on top of that, we are also, even if we look at things that customers haven't explicitly asked for, we are trying to innovate and provide solutions that I think would be really beneficial to help developers debug with more confidence.

I love that. So I know there's been an addition to one of the features out there that kind of exemplifies that. Can you share more? Yes. So something that the team has been developing for a long time and is currently in open beta is our mobile replay product. So historically, replay has always been for web, but now we have brought it on to mobile,

specifically Android, iOS, and React Native. So this allows our developers, no matter what stack they're using, to get video-like reproductions that actually help users see the repro steps that led to an error and also understand the user impact of an error. - Okay, Sentry is always shipping, always helping developers ship with confidence. That's what they do.

Check out their launch week details in the link in the show notes. And of course, check out Session Replay's new addition, mobile replay, in the link in the show notes as well. And here's the best part. If you want to try Sentry, you can do so today with $100 off the team plan. Totally free for you to try out for you and your team. Use the code CHANGELOG. Go to Sentry.io. Again, Sentry.io.

Hello and welcome to Ship It, the podcast all about everything after git push. I'm your host Justin Garrison and with me as always is Autumn Nash. How's it going, Autumn? I'm caffeinated this time. Caffeinated is a good call, and caffeinated for our last guest on our show. I know, it's like sad.

It is sad. So for anyone that's listening, I heard people that just started listening to the show, welcome. We're glad you're here. By the way, this show is not going to continue on past the end of 2024. No way.

So a bit of news is, changelog is switching some stuff up. They want to focus on the main podcasts that they have: Changelog & Friends, Changelog News, and the main Changelog podcast, which is great. Like, I think Jared and Adam have a lot of good networks and I love those shows. So continue listening to them. But that did mean they had to make some tough calls about the other shows in the Changelog

right now, and that includes the Ship It show, Go Time, JS Party, and Practical AI. Those four shows are not going to continue after 2024. We'll have the link to the blog post announcing it in the show notes here. But also, Autumn, we're starting a new show. We couldn't give up being nerds and talking too much on the internet. So everybody,

Here we are. I mean, I love this community we've already built. The people that we've already met through the show. All the things I've learned through the show have been awesome. Dude, I've straight made besties from the show. Me and Hazel do hood rat stuff like every weekend. I've been seeing you. Yeah, you're just like, go and have dinner. This is awesome.

I feel like we've come too far. We have to just do it. And the fact that we already had like 20 more awesome guests lined up for next year for Ship It. I'm really excited to talk to you. Yeah. So we are going to continue the show. The new podcast, in the last episode, we didn't have a title. So finally, we have a title and a domain for it so we can tell people. It's called Fork Around and Find Out. I love it so much. I didn't love any of the other names. Like every name we talked about, I was like, ugh. And that one, I was like, let's do it. Yeah.

There was a lot of names, and a lot of them were either just like, eh, that's okay. Or the domain was taken or social was taken, whatever. And FAFO.FM. If we're going to spend this much money on it, it needs to be fire. Yeah. And that is also the other piece is we do want to make this as sustainable as possible. And when I first emailed Jared and Adam about taking over the Ship It podcast,

I didn't want to take it over. I was just like, can I help? Because I liked the show and I wanted to continue. I was like, do you want to co-host? What do you want to do? And my main thing was like, I don't have time to do all of the things required to maintain a podcast long term. And I wanted something that could be long term. And that is the goal for the new podcast as well. And so we do have...

people that we're paying to do stuff that we don't have time to do. Stuff like editing, sponsor management, all that stuff. We don't want to have a premium tier. I don't really expect to have the listeners pay for it. I'm terrible at doing the plus plus content that we have here on Changelog, where I just don't ever think about it in the middle of a conversation. We just talk too much at the time. Yeah, and all the good bits, I just leave it. No, that has to stay in. I feel like it would just make the actual podcast...

You know what I mean? Like saving the good stuff just for people that are paying for it. I mean, and some of it is the good stuff. Some of it is just like the random asides, like the great conversation we had with Austin about kids and managing GeoRatast with chores and stuff like that. Like that's not necessarily good content to me, but it's kind of made his website is so dope. Yeah. It's an aside. It's not something that needs to be in there, but we're just going to leave everything in for the new show. And I hope that sponsors will pay for it. Or that time when there was a poem about like TARS.

Oh, that was amazing. Yeah, that was plus plus. Like a nerdy little heart. It was baller. Yes. If you have not listened to that episode, what number was that? That was so long ago now. But TARS all the way down. 10th episode or something. Yeah. I was blown away by that. It was great. So.

Anyway, the link will be in the show notes. By the time this episode comes out, the website should be available. I don't know that the RSS feed will be in all of the pod catchers, but it should be in most of them that aren't super slow. So look in the show notes. FAFO.FM is the website, and you can subscribe there as well. But we have today our last guest. Phillip Carter is a principal PM at Honeycomb. Welcome to the show, Phillip. Thanks for having me. Thanks.

I feel like you're going to have to come back on the new podcast. Oh, it's going to be great because we just get to redo all of our amazing guests and just like, hey, this is new. So we have to be like extra ridiculous for this one and then you have to come back. Awesome. Sounds good. I'd be happy to do it.

And in the pre-show, we did determine that being a PM doesn't mean you work at night. And so it's something else beyond that. But tell us a little about your background. Like, how did you end up as a principal PM at Honeycomb? And what software are you responsible for?

Yeah. So I started my career working at Microsoft right out of college and I joined the .NET team. And that was pretty fortuitous because this was around 2015 when we had the very first, very crappy preview of cross-platform .NET that we were building, which at the time was called .NET Core. And this very big...

big goal of like, hey, we want .NET to move from this Windows-focused development platform, which is fine, but clearly not the future of software, to be inherently cross-platform, inherently native. Like, you could run on Linux and deploy on Windows if you wanted to. You'd be a weirdo for doing that, but you could. But you could build your apps on Windows, deploy on Linux, build your apps on Mac,

deploy on Linux, build your apps on Linux, deploy on Linux, like do whatever the heck you want. Work well in containers, like all of the things that we associate with like modern software development these days, like that was, these were all just like big bullet pointed lists of like, we need to add a green checkbox to every single one of these.

And we're going to do it in like five years. So we did. Is it not called .NET Core anymore? No, no, it's just called .NET. And there's huge, huge branding things of like that, you know, the Windows thing was called .NET or the .NET Framework.

And they're like, okay, we still have to keep calling it the .NET framework. There's actually some legalities associated with that. But at the same time, it's very obviously the legacy .NET. You can't call it legacy .NET because there's all these existing customers who are like, wait, it's legacy? That means it's going away? And we're like, no, it's literally supported infinitely.

As like, you pay absurd amounts of money to Microsoft, of course the software is going to be supported. But also, if you want to run in all these new services that are lighting up in Azure or whatever, yeah, you're gonna have to migrate, because those things are not just Windows boxes running in the cloud. These are Linux hosts. You know, the whole world of cloud development is clearly not Windows-focused. And so, huge journey.

So I worked on our languages, specifically on the F# language and the C# compiler quite a bit. And then IDE tooling in Visual Studio, and also our family of like Visual Studio Code, and there were other cross-platform language server type stuff. And this is, keep in mind, also when the concept of a language server was still being actively developed. At this point, were you an engineer or were you a PM?

I was a PM, but PM was kind of weird in that group. We wrote a lot of code. We did a lot of prototypes. It was not uncommon for PMs to actually just implement entire features or do the initial architecture for something, because we needed someone to not be tied to the current shipping schedule and just go and explore, see what we can do. That's the kind of PM I'm trying to worm my way into being.

Yeah. For someone that's outside the sprint cadence, you're like, nah, I can do whatever I want. I was like, can I contribute upstream? And they're like, if you want. And I was like...

That's where the night job kicks in. I do really recommend it. I mean, I think like as an aside, in terms of like product work, there's a lot of literature about how to be a good PM and how to do good product work and stuff like that. And I think a lot of it ends up getting over indexed on and a lot more time gets spent on things like product discovery when you just kind of have...

your hands in all the things at all times, and you're constantly interacting with your users and all of that, you can spend shockingly less time on so-called PM work and still have a pretty clear set of goals, and then develop just a lot more empathy. Especially in developer tools, your users are writing code, they're doing, you know, XYZ stuff. And it's

somewhat of a sometimes controversial thing to say like, Hey, you should do that too as a PM, because that's just the reality that your users are living. And what better way to understand their problems than to literally experience some of their problems yourself and be like, wow, this sucks. Maybe we should like fix this. That's why I feel like I want to stay like

as technical as possible and using the products that I'm a PM for because I don't want to lose the empathy for developers. Intuition is way better than data. If you're data-driven, you're like, no, no, I know how this should work because I do it. It's so much more powerful than saying, here's why and I will show you why. I'm not going to lie, but I'm so excited to write code for fun and for an experiment and not in production. Nice. I'm back.

Not being on call is a magical experience. Phillip, so like out of college doing this PM, like PM work is usually more of a leadership type role, for like a pretty big sort of migration process. Like again, making legacy makes money, right? Like now you're like, oh, that legacy thing makes a ton of money and we need to make it make new money.

is like a big responsibility straight out of college. It is. And this is, I think, one of the places where Microsoft, or at least certainly Microsoft in the era that I was there, is really a great place for growing PMs. Because they do throw you into the deep end, but then they also treat you like someone who is capable of swimming in the deep end, right? And so

Like occasionally someone, you know, it's not the right role for them or like, you know, they frankly, it was like a little too much or something like that. But like the majority of people who I work with, they're like fully intellectually capable of like handling all of those kinds of things. And there's obviously like a learning curve.

and a pretty good structure to make sure that you don't make a horrifying mistake. But it's a great way to learn really quickly. And I found that since I switched over to Honeycomb, into the startup world, there's this meme about how people in big tech can't work in startups because they're too used to too much structure and too much stuff around them and yada, yada, yada. And I'm like, no, we were like feral children left to just try to make these things work

with users in the millions. And the concept that we have users in the dozens right now for this thing is not necessarily easy mode, but easier in some ways and harder in some other ways. The small scale impact is amazing. Like this is my first startup too. And coming from Amazon, where it felt like the PM group was also thrown into the deep end. But at some point, Amazon was kind of putting their foot on your head.

And you're kind of like, wait a minute, I'm trying to swim here. What's going on? And then at a startup, it was just like, oh, I have all the same responsibilities, but I have none of the process overhead of like, oh, I have to go talk to 18 people before I can do that one thing that I know is the next thing I have to do.

to do. I'm really excited to be working as a PM for Linux at Microsoft, in the security realm, because it's so new to Microsoft that it's got that startup vibe, which was very much Keyspaces at Amazon. And like, you get to do all the things and try all the things, but under the shell of corporate America. Yeah, I mean, security is really new for Microsoft. I

You were gonna throw shade at me? Why are you like this? Also, I've decided Phillip is like our people, because did you see him give his whole background?

With just a little bit of honesty and truth and shade. But then also, like, you're right, you can have a good experience and have some shade. It's fine. You absolutely can. But it's being honest, and people being able to trust your opinion. Your experience is your experience, and no one can take that away. But when people aren't honest, and they'll just say all the good things and blow smoke at you, you won't trust them and their opinion on anything technical. That's how you lose all your credibility, you know?


What's up, friends? I'm here with Kurt Mackie, co-founder and CEO of Fly. As you know, we love Fly. That is the home of changelog.com. But Kurt, I want to know how you explain Fly to developers. Do you tell them a story first? How do you do it? I kind of change how I explain it based on almost like the generation of developer I'm talking to. So like for me, I built and shipped apps on Heroku, which, if you've never used Heroku, is roughly like building and shipping an app on Vercel today. It's just, it's 2024 instead of 2008 or whatever.

And what frustrated me about doing that was I didn't, I got stuck. You can build and ship a Rails app with a Postgres on Heroku, the same way you can build and ship a Next.js app on Vercel. But as soon as you wanna do something interesting, like as soon as you wanna, at the time, I think one of the things I ran into is like, I wanted to add what used to be like kind of the basis for Elasticsearch. I wanna do full text search in my applications.

You kind of hit this wall with something like Heroku where you can't really do that. I think lately we've seen it with like people wanting to add LLMs kind of inference stuff to their applications on Vercel or Heroku or Cloudflare or whoever these days they've started like releasing abstractions that sort of let you do this. But I can't just run the model I'd run locally on these black box platforms that are very specialized. For the people my age, it's always like, oh, Heroku was great, but I outgrew it.

And one of the things that I felt like I should be able to do when I was using Heroku was run my app close to people in Tokyo for users that were in Tokyo. And that was never possible. For modern generation devs, it's a lot more Vercel-based. It's a lot like, Vercel is great right up until you hit one of their hard line boundaries, and then you're kind of stuck. There's another one we've heard from someone within the company. I can't remember the name of this game, but

The tagline was like, five minutes to start, forever to master. That's sort of how we're pitching Fly: you can get an app going in five minutes, but there's so much depth to the platform that you're never gonna run out of things you can do with it. - So unlike AWS or Heroku or Vercel, which are all great platforms, the cool thing we love here at Changelog most about Fly

is that no matter what we want to do on the platform, we have primitives, we have abilities, and we as developers can chart our own mission on Fly. It is a no-limits platform built for developers, and we think you should try it out. Go to fly.io to learn more. Launch your app in five minutes. Too easy. Once again, fly.io.


All right, Phillip, here's one honest thing that I've wanted to know for a very long time. Oh, no. There was a .NET fork called Mono or Mono, right? It was .NET. It was its own open source thing. How was that treated inside of Microsoft, being on this .NET team that was trying to get there while they were coming from the other side? They made this .NET native thing on Linux. Was it like, don't ever talk about it? Was it like, we are going to beat them and consume all their stuff? How was that treated?

Surprisingly, pretty well. So at the time, Miguel, he's now a good friend. Shout out to Miguel. You should talk to him sometime. He's amazing. He was the lead of the Mono project. He founded it when it started and then helped create Xamarin with Nat, who has since become CEO of GitHub and all that. But they very much set out to focus their runtime

on making it work as well on the client as much as they could. Once mobile hit, Miguel was attached to it and he's like, "I love this form factor, but I want to use C#." That's just the way it's going to be. When you make a lot of optimizations for the mobile form factor, which is great,

Unfortunately, that can often come at the expense of really good performance server side, because the constraints are just fundamentally different. Like, you know, when you have a backend service where GC pressure really matters when you have a couple million requests per second or something like that. It's just fundamental behavior of the garbage collector that needs to, you know, work a certain way, compared to how you would want it to work in a battery-constrained device doing certain things.

Obviously, lots of similarities. There's elements of systems programming going on here where there's good code written in both places. But very often when you would try to host a Mono service somewhere, it would just fall over and you'd be like, oh, is this a joke? It's like, no, it's not actually. It was just...

made for something completely different. And so we had this kind of complex relationship where this is before the acquisition of Xamarin where we were building our own thing and targeting the back end with ASP.NET apps and saying, okay, we want to make this like, you know, a Go developer should be able to pick up our stack and be like, oh, great. This has the same perf that I care about. That was sort of like the target we were going for. But on the Xamarin client side, we're like, okay, this is just a fundamentally different runtime and actually even different like set of libraries. And then we worked with them to start

unifying some of the library layer, because there's just all the different utilities in the .NET standard library that really don't matter where they run. But they had their own version of this and we had our own version of this. And we're like, okay, regardless of what degree we work together on, we all agree this should just be a standard that we all consume. And who owns the code? Like, we'll figure that out. It's open source projects all around anyways. And so there's actually a lot of collaboration on

that side of things. And then eventually, strategically, I think they were probably aiming for this at some point. We acquired Xamarin, at which point we're able to unify significantly more. And then we actually were able to go on a very big technical project where we unified the runtimes entirely. And we're able to actually do that quite successfully to the point where it's actually the same singular runtime that can run on...

a client device or on the back end. And they have excellent performance characteristics. And it's actually like different parts of the runtime will activate depending on the environment. And the garbage collector has five or six different modes that it can operate under. And it'll have different behaviors depending on the kind of thing that it's packaged into and stuff. And this is all something that was architected and designed for. It was pretty wild. Do you know if other languages do that?

Is that something that's common? I know that in Java, they do that because they have many different runtimes for different purposes. However, the burden is sort of on the developer to deploy the appropriate runtime for their thing. Whereas in .NET, we architected it so it's just one thing that you deploy to whatever...

device or server or what have you, and the right things activate in the right way. It also has to do with a lot of the garbage collectors. There's different garbage collectors, and you need to tune them in different manners depending on what you're doing. So it's possible, but it's not as easy. Like, you need to know what you're doing, or you need someone who knows what they're doing, or to do the research and take the pain. But once you get it running, it's very, very effective. And the new

Changes they've made to the garbage collectors and like Shenandoah and all the different like projects make it very, very effective. Java's gotten progressively faster in the last couple of runtimes. So they've done a lot of really good work and a lot of like the profilers and things to get the information, like the new projects that are coming out or have made a lot of advances in such a legacy technology.

Oh, yeah. Like Java, it's an impressive engineering system that they have, that they built with just all these really smart people there. And like, obviously, like the language side is also evolving with all these like great features and stuff. But like kind of to your point, Java's like low key, really,

really good with perf. And I think there's this association a lot of developers still have of like, oh, Java is slow or whatever. And it's like, well, yeah, in 2005. Well, not just that, but literally a lot of the things that are coming out are like...

Usually, as it gets older, people are just maintaining it. But the things that have moved forward in the last couple of years in Java are really impressive to be at this stage in technology to be just making it so much more efficient. And to be able to bridge the gap, I don't think we're ever going to get to the point where it acts like it's something that doesn't need to be compiled. But the gains that we've made in the last couple of years are so impressive that it's getting competitive if you don't want to

write in C++ or you don't want to have to wait. I remember at Amazon, because it's like the problem with Java is like no one ever upgrades their Java, right? It's just like, oh, well, the perf problems of 2005. Oh, don't give me PTSD. Stop. Are still because people are still running it from 2005, right? Like I'm still on Java 9 or something like that. But I remember at Amazon, they had this whole like shim layer that they built in. And I'm pretty sure there was a blog post about this where they're like, well, we're not going to rewrite all our Java, but we are going to compile onto the new

virtual environment for like where the actual runtime with the compiled VM runs and they saved like millions and millions of dollars because they're like we just got better performance. The performance literally paid for like

millions of dollars. All the engineering time plus more. Yeah. It was crazy. Just like, we just put a shim. We didn't rewrite the code. We recompiled it, put the shim. During the pandemic, when they were firing other people, our department kept growing, because that's how much money it saved. Like, it got to the point where they wanted to rewrite everything in Rust, but Java got so much faster that they could prioritize what they wanted to write in Rust. And Amazon does a lot of things wrong in open source, but the way the Corretto team is built is,

And how they maintain and contribute to open source, like upstream, is just phenomenal. They hired a lot of very smart people to do a lot of amazing things. And I love that it's contributed back and it's not just taken for Amazon. Like I think if they ever do anything well, they did it for that team.

So fast forward a little bit here. Let's get out of the .NET era and into Honeycomb. That's baller though, because .NET was like, you know what I mean? Like to do that all on one runtime. Like I feel like you said it, but you didn't say how...

That is amazing to run mobile and for it to be able to do the decision or not decision making, but like to do the change for you to know how to like run on that. Build that into the language. Don't make it hard for the developers. Like that is just like, I don't feel like you gave yourself enough credit in that statement. That is impressive. Well, it was hard work of a lot of people. But yeah, I think like we had some pretty ambitious goals and we kind of nailed them.

And I'm really proud of that. Yeah, you just talked about it. Like, you just, like, made the soup of the day. Like, you're doing engineering magic over here. Like, what do you mean? And then you leave Microsoft and you're over at Honeycomb now. You went straight from Microsoft to Honeycomb? Yeah. So it was kind of funny, because one of our major success metrics was, well, how many users are using the new .NET?

We reached about 5 million, and that was sort of like a major benchmark, and we hit it. And I just had a conversation with my PM director and

And I asked him, so what's the next big thing? I mean, this was really ambitious. And we knocked it out of the park. What's the next big one? And he's like, I don't know. And that was because he was switching jobs over into another department, offered a VP position at Microsoft. I'm like, oh, that's why you don't know. Okay. Well, I guess maybe I'll look around too. Because I really liked the team that I was on. But you complete an ambitious arc.

And you're like, OK, well, let's kind of look around what's out there. Maybe there's something new. There's clearly a lot of interesting stuff going on in developer tools outside of programming languages. So like looking up stack, what are the things that are there? So I ended up talking with some folks at Honeycomb just because I didn't know anything about observability at all.

But I was aware of Liz and Charity, and I was aware of a general consensus among some of the .NET developers that I knew that Honeycomb was doing things the right way or the better way or whatever. And I'm like, okay, cool. Let's explore. What's the worst that can happen? It's not for me? All right, cool. So chatted with them, kind of did a whole interview loop, really liked the people and everything.

was sold on some of the initial vision because Honeycomb came out before OpenTelemetry, but it was still very, very early stages when OpenTelemetry was formed. And Honeycomb as an observability tool is fundamentally different from the rest of the market. Kind of like

has this ambitious goal of like, we really want to help developers fundamentally reshape the way that they introspect their systems. And this gets from super high level, like how you do an analysis workflow.

You know, instead of saying like, oh, I'm going to look at my logs and then I'm going to go look at traces that correspond to that and see if I can find like a match and like, oh, I have this metric that says it's up. All right, cool. Are there any logs that relate to like this time range or something? Kind of a broken debugging flow compared to what we sort of do. And that kind of attracted me.

But then at the same time, they're like, well, there's also this whole other thing with open telemetry where regardless of our product shape or what we think people should be doing best, there's this open standard that's evolving that is ambitious enough that people... It captures the majority of what people care about. And we have some pretty big customers who are like...

them continuing with Honeycomb is pretty contingent on us being quite deeply involved in the project. So one of our larger customers, quite literally in clear contract terms was like, "You need to be significantly involved here because we're taking a bet on standardizing on OTEL and we're taking a bet on using Honeycomb across all of our teams."

I do not want one of those pillars to falter in some way. So I trust that you care about your business, but do you really care about OTEL? Figure it out. That's sort of what the role was scoped to was, okay, we're going to take a bet on OTEL. Great, we did it. Mission accomplished. But what is that bet?

"What are we doing though? Where should we point our time and our limited number of engineers we have? Should we hire for this? For how many? What are the most important things to invest in? What are the major problems that people have?" All the big stuff. And I was just given the mandate to go and shape that into like, "Where do we point the vector of our force vector, if you will? And how do we keep doing that? And then how do we start A,

A, trying to start skating to where the puck is going, like, the project is evolving in this way. Can we make sure, at the very minimum, that the Honeycomb product doesn't get destroyed by this thing in some way? We don't want to be caught by surprise in some capacity. But then also B, okay, well, we do have a perspective here. How much of our perspective on observability aligns with the overall perspective of OpenTelemetry?

How do we make these things come together in a way that fits very well and works for everyone involved? And how do we ultimately demonstrate that we are leaders in the project instead of just consumers of this open standard? That was sort of like the main thing. And that's what I did for about three years or so.

And being a startup, I kind of had my hands in all kinds of different places. But it was always very fundamentally rooted in being an OpenTelemetry maintainer, which I've been since... Gosh, I think like the end of 2021. And trying to fix all kinds of problems that like, oh, someone's trying to use OpenTelemetry plus Honeycomb, plus this language, plus this environment that they want to run in. It falls over in this way. Okay, great. Like, is that...

a legitimate bug? Is that a documentation problem? Is that like they wanted to use an API in this way and they couldn't figure it out, and we need to add a more convenient API to something, just to make it more obvious for someone how they accomplish something? Just multitudes of that kind of work over the course of the years, and just being much more involved in the project and trying to shape, like,

you know, what are the things that we try to make good in this big open standard that, you know, none of us have the time to really work a hundred percent on all the time. And yeah, kind of all that stuff. Did that, still doing that. Frankly, you guys can't see it, but Phillip made the best face.

It made my whole life. So in fall of 2022, or like late summer of 2022, I'd been at Honeycomb long enough that I kind of got the whole breadth of, what does the customer journey look like? What are people struggling with? Like when they onboard versus when they're on day 250 of using the product, and what are their struggles when they're trying to onboard other teams in their organization? Because like, you know, this one group buys Honeycomb and they have an awesome time, but that doesn't necessarily mean that this other engineering team that they work with is also going to have a great time. And their challenges might be slightly different. And I came to the conclusion

that our product has all of these things that people want to do where the answer lies on some probabilistic distribution. A very concrete example that ultimately turned into one of the features that I helped build was people come in saying, hey, I want to query something in this way. I care about this information.

Well, there's possibly hundreds of queries that could technically work for what you're trying to do. And there's no single one that's guaranteed to be the right one. If you say slow requests, OK, well, there is a way to technically measure that. But then you get into, OK, we have all these different aggregations. Do you care about

an average? A P80, a P90, a P95? Like, do you not actually care about any kind of aggregation? You just want to see a count of some kind? Do you care about a max and you want to see the maximums? And then when you say, okay, slow requests, but with respect to what? Is it with respect to a particular component, a

call to an HTTP route, or a call to a database if you have multiple databases? There's so many different ways to potentially answer this in terms of a query, and nothing in our product was like, "Hey, here's how you do that." It just all assumed that you knew how to shape

what you cared about in the right form and the right kind of query to sort of do that. And this is just something that, frankly, is still a problem with Honeycomb. The natural language querying thing that we built in early 2023 is just a step in the direction of helping people there. But I kind of wrote this document that was like, "Hey, there's all these areas in the product where the solution lies on some probabilistic distribution.

And there are parts of that distribution that are likely to be more useful than other parts. And that is squarely the domain of machine learning. So we should investigate, we should explore, we should experiment, we should try stuff, ship to learn, see what happens, around and find out... or sorry, fork around and find out. We got to keep Phillip forever. Like he has to come back. He is our people.
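To make the "slow requests" ambiguity described above concrete, here is a small illustrative sketch. The query shapes below are hypothetical, loosely Honeycomb-flavored JSON with invented field and column names; they are not the product's actual schema.

```python
# Four plausible readings of the phrase "slow requests", expressed as
# hypothetical query shapes (all names invented for illustration).
candidate_queries = [
    # "slow" as a high percentile of duration, grouped by endpoint
    {"calculations": [{"op": "P95", "column": "duration_ms"}],
     "group_by": ["http.route"]},
    # "slow" as an average, grouped by backing database
    {"calculations": [{"op": "AVG", "column": "duration_ms"}],
     "group_by": ["db.name"]},
    # "slow" as the worst single cases per service
    {"calculations": [{"op": "MAX", "column": "duration_ms"}],
     "group_by": ["service.name"]},
    # no aggregation at all: count requests over some threshold
    {"calculations": [{"op": "COUNT"}],
     "filters": [{"column": "duration_ms", "op": ">", "value": 1000}]},
]
```

All four are defensible interpretations of the same three words, which is exactly why no single canned query can be the answer, and why the answer lies on a distribution.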

Okay, friends, I'm with a good friend of mine, Avtar Swithin from Timescale. They're positioning Postgres for everything from IoT, sensors, AI, dev tools, crypto, and finance apps. So, Avtar, help me understand why Timescale feels Postgres

is most well positioned to be the database for AI applications. It's the most popular database according to the Stack Overflow Developer Survey. And one of Postgres's distinguishing characteristics is that it's extensible. And so you can extend it for use cases beyond just relational and transactional data, for use cases like time series and analytics. That's kind of where Timescale, the company, started, as well as, now more recently, vector search and vector storage, which are super impactful for applications like RAG,

recommendation systems, and even AI agents, which we're seeing more and more of those things today. Yeah, Postgres is super powerful. It's well-loved by developers. I feel like more devs, because they know it, it can enable more developers to become...

AI developers, AI engineers, and build AI apps. - From our side, we think Postgres is really the no brainer choice. You don't have to manage a different database. You don't have to deal with data synchronization and data isolation because you have like three different systems and three different sources of truth. And one area where we've done work in

is around the performance and scalability. So we've built an extension called PG Vector Scale that enhances the performance and scalability of Postgres so that you can use it with confidence for large scale AI applications like RAG and agents and such. And then also another area is coming back

to something that you said, enabling more and more developers to make the jump into building AI applications and become AI engineers using the expertise that they already have. And so that's where we built the PGAI extension that brings LLMs to Postgres to enable things like LLM reasoning on your Postgres data, as well as embedding creation. And for all those reasons, I think, you know, when you're building an AI application, you don't have to use something new. You can just use Postgres.

Well, friends, learn how Timescale is making Postgres powerful. Over 3 million Timescale databases power IoT, sensors, AI, dev tools, crypto, and finance applications, and they do it all on Postgres. Timescale uses Postgres for everything, and now you can too. Learn more at timescale.com. Again, timescale.com. ♪♪♪

So did Honeycomb create OpenTelemetry? Or was it like an open source project that they ended up adapting? Or did they start it as an open source product? Was it started as an actual product for Honeycomb that wasn't open source and then it was open sourced? How did all that come about? Yeah.

Yeah, OpenTelemetry was founded by several folks. I don't think, well, I don't remember if Liz was on the founding group or not. I don't think she was, but she was part of the initial governance committee. But so basically, there were several open source projects like OpenTracing, OpenCensus, Jaeger, Zipkin, all solving various flavors of the same problem.

but not quite completely enough to the point where there needed to be another project spun up to do something in a slightly better way. And so people who worked on all of these things and also other folks who work at Splunk or at the time, I think Morgan was at Google and stuff, but several of these folks, and they all got together and they're like, hey, we're all solving anywhere from 50% to 75% of the problem space that we need to be solving for. And we're all doing it independently. Let's all get together and

go to 100% of the problem space that we need to solve for, together, as one standard. Because a million different standards that are slightly incomplete for certain use cases that people have means we're not going to grow. And the world of proprietary instrumentations from all the other vendors is just going to stay there. And then that has its own set of negative consequences that organizations actually do not like.

But we're not meeting them where they are. So let's get together and do that. There was this consortium in about 2019 where they did this. I love that you got that many engineers and people good at things. It's like for them to say, okay, our projects maybe need some improvement. We should work together. You got people that are engineers in a room to say that out loud?

Watching this all happen from the outside, right? Like at the time, I was just an observer of, why do they keep changing the names on all these things? Because OpenTelemetry came from something else that was open-something. I don't remember what it was, but I remember watching Jaeger and all those things, all these little projects spinning out of big companies, and they all were doing bits and pieces here and there. And then it all just kind of

got together. And at first the companies didn't want it, because all the companies were like, this is actually how we make money. Our instrumentation, like the Splunk instrumentation in your app, is what makes Splunk sticky. And if we can make that sticky, then we have a competitive advantage. And then once we're like, oh, we can just democratize all of that collecting, because at the end of the day, it's all basically the same kind of stuff, just a new flavor, a new API.

And some of them didn't have support for different languages. So we're like, oh yeah, if you want Scala, you got to go over to this one, and if you want Go, you have to go to this one. And it was like this huge thing, like, why don't we just collect all of that stuff? And so from the outside it was interesting just watching it all kind of collapse into this: oh, one group is going to have

all of it. At first I thought it was just tracing. OpenTracing was like the big thing, like, we're gonna get all the languages together and we're just gonna do the instrumentation. And then it was like, oh, well, we're also gonna do the metrics. Oh, we're also gonna do the logs. Oh, it's like a full-stack thing of what was going on. Every time I'd listen to Charity or Liz talk, it's like, it doesn't matter what the data is. You have to be able to

Look at it. You have to be able to understand what's going on. And the difference between collecting data and understanding the system are two different problems, which I think is fascinating what you're calling out here, where it's like, I can't just have all of the data in the world and all of a sudden understand, oh, this is where the problem is. Like, no, you have to know where to look. And someone, like you mentioned, like someone has to help you get there.

And there are probabilities in those systems. Not just that, but you need to be able to get the bigger picture of the data. Like people collect data all the time and they have no idea what to do with all those logs and what to do. You just keep zooming out, right? Because when you have a log, you're like, oh, I'm going to print here in my code. I'm like, oh, I found the piece, right? Like I'm terrible at GDB, so I'm just going to do this one print and I find it. But like once you zoom out to like, oh, that was 10 things calling it. Okay, how do those things call? Is this network related? Is this DNS? It's always DNS. Yeah.

And then just keep zooming out. You're like, okay, now how's the application? What does the customer experience look like? Honeycomb seems like they always approached it from that side of it. What's Charity's line? Uptime doesn't matter if your customer has a bad day, or something like that. Like if your customer is angry, then...

All the uptime in the world doesn't matter. And so being able to just go back to those basics, but as an engineer, where do I look? How do I look there? How did you start building that ML, AI infrastructure stuff to get people in, to nudge them in the right direction?

Yeah. So we approached this from a couple different ways. I was lucky enough to be able to go to several different conferences and work with our sales engineers on just doing demos for people and seeing what's the ideal story that we tell people. And I got to do all these demos and stumble through them and eventually get my own storytelling craft down with respect to...

poking around with what these people care about and then trying to map that to, like, okay, we have this system. So I'm going to show you the system. It's like this e-commerce site. And I could show you some metrics showing uptime or memory use of pods or something like that. But that doesn't tell me anything about the business problems that this set of services solves. There are real users. There is

the concept of a shopping cart, and some users fill the shopping cart with more things than other users. And there are certain aspects of latency or reliability, errors, what have you, that matter more in some contexts than in others. So like...

Of course, it's important when you have low latency. You should have low latency all the time. Yes, we all agree. But it's especially important if someone presses the checkout button when they've added a bunch of stuff to their cart. That thing had better not be slow and it had better not have an error. And real systems being as complicated as they are, there's always going to be slowness and errors inherent in everything. But some things matter a lot more than others. And can you...

track that kind of stuff systematically, such that you can always start from a high-level entry point that is broad, but also opinionated enough to let you narrow down toward a particular characteristic, without requiring you to know upfront where you need to start narrowing down on that.

That's the fundamental thing. Imagine you're a new engineer brought into this e-commerce site. You don't know how any of this stuff works, but you have to fix a bug because you got paged and you're now on call.

Do you want to be in a situation where you have to know the exact lines of code that you should be watching out for? Well, no, that sucks. That's going to take forever. You're not going to be able to solve that. Or can you start with a high level? Okay, well, let's say we do start from this notion of slow requests, but it's based on endpoints. Okay, I can narrow that down to there's this one endpoint that's weirder than the other. That's probably...

a breadcrumb to sort of go and follow. I'll go with that. Okay. Well, what's interesting about the full distribution of latencies on this? Oh, I see. There's this weird little spike going on right here. I wonder what that could mean. Using the Honeycomb tool, we have this anomaly detection thing. It's called BubbleUp, but I think now we're...

At any rate, you do this thing called a BubbleUp, and you can visually select this part of a graph that looks a little different than the others. And then it'll just automatically compare all of the events in the selection versus what's not in the selection, and splay out, via literally histograms, all the values that correspond to all the columns in each of those events, and sort them in a way that you can say, oh, well, there's these like...

Five columns and these like handful of values that associate with these columns in my data that are actually the characteristics that associate most with this spike in latency that I see here. But I didn't have to know up front that like, oh, there's this one attribute in my data that is the thing that I should be looking at.
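As a toy illustration of the comparison Phillip describes, here is a minimal sketch of a BubbleUp-style ranking, assuming events are flat dicts of column/value pairs. This is an invented simplification for illustration, not Honeycomb's actual algorithm.

```python
# Rank column/value pairs by how much more frequent they are inside the
# selected region of the graph than outside it.
from collections import Counter

def bubble_up(selected: list[dict], baseline: list[dict], top: int = 5):
    def frequencies(events):
        counts = Counter()
        for event in events:
            for column, value in event.items():
                counts[(column, value)] += 1
        total = max(len(events), 1)
        return {pair: n / total for pair, n in counts.items()}

    inside = frequencies(selected)
    outside = frequencies(baseline)
    # Pairs far more common inside the selection likely explain the anomaly.
    scores = {pair: inside[pair] - outside.get(pair, 0.0) for pair in inside}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

slow = [{"http.route": "/checkout", "db.name": "carts"}] * 8
normal = [{"http.route": "/home", "db.name": "sessions"}] * 90
print(bubble_up(slow, normal))  # /checkout and carts float to the top
```

The point of the flow is the same as in the conversation: the user never has to know in advance which attribute matters; the comparison surfaces it.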

And it's this sort of generalizable thing. It's this thing that works... When people watch this flow, they're like, wow, this is how I actually do want to be debugging. Because when I get onboarded onto a thing, I don't get onboarded onto what every line of code does first. I get onboarded onto like...

What is the purpose of this thing? What should it be doing? What matters the most to our business? And like, this is how like, I mean, frankly, most organizations should probably be working anyways. It is, and they're not. There's a whole other thing there. But this whole flow that I'm talking about

A, it's a little bit fuzzy. B, it's a little bit probabilistic. C, sometimes you get into a dead end, you got to back out. It's like, you know, you're not always going to be 100% right every time, but it's fine, right? Like you're exploring, you're debugging. You're also still learning about the process while doing that though. So it's not valuable time wasted.

Right, exactly. In the course of doing that, there's sort of these ever more slightly opinionated and narrow views of the world that you get as you go through each next step. And so we said, okay, what are ways that we can start replicating that, in a way to get people past, you know, the empty page? What do I start with?

How do we get them onto a track where it's easier for them to start narrowing in on this particular kind of debugging? And that manifested as like, okay, well, when we talk with a bunch of people, they can very often express what they want to start looking at. They can just say the words, well, I care about slow requests in my system. Awesome. Cool. Like, what is a good query for slow requests in your system? Well, it depends on your data. Oh, crap. Okay. Well, we can't like literally give you the single query for that. However...

an ML model can look at the shape of your data and your schema and have some examples of other shapes of data and other schemas and say, "Well, in those shapes, slow requests can mean this kind of query." Given this shape that is like

kind of similar to these other shapes that I have in my example set, well, I can create a query that is probably going to show you something kind of like that. And so there's degrees to this where it can actually work very, very well, especially if you're using OpenTelemetry and there's a lot of common names for certain kinds of operations. But we found that GPT-3.5 back in 2023 did a shockingly good job of being able to say, okay, well, if I have this named pair

or named tuple, I should say: I have a natural language query, and I have a list of columns that are considered relevant for this natural language query.

And I have a query shape, which is like a JSON object that follows a particular schema. And it basically said, given these three things, that query shape matches the two other things that you have there. It can then look at your column names and say, oh, well, you measure latency with a slightly different name for a column, but the name of it

implies something to do with latency. And so that's probably a good candidate that we can pull in as context. And then just through basic few-shot example prompting, you can actually just sort of trick GPT-3.5 into saying, oh, well, my job is to output a JSON object that fits this particular schema. And in the system prompt, there's some rules about what makes for the right JSON object and stuff. This is something that I experimented with in April

of 2023. And in the course of doing that experimentation, I found, wow, this thing actually does output really good stuff. And I know that there's going to be a very long tail of like, it might not get something quite right, but like this shifted from,

is this even possible, to, oh, this is definitely possible. And this is probably also going to be useful for several people out of the box already. But what can we do to make it really useful to as many people as possible? That's kind of where my mindset shifted, and that's how the thing ultimately started into building our natural language querying system.
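Here is a minimal sketch of the few-shot setup Phillip describes, under stated assumptions: the example triples, the JSON query shape, and the prompt wording are all invented for illustration (this is not Honeycomb's actual prompt), and it uses the OpenAI Python SDK with gpt-3.5-turbo.

```python
# Few-shot sketch: (natural language query, relevant columns, JSON query
# shape) triples as examples, then the user's question plus their columns.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXAMPLES = [
    {"nlq": "slow requests",
     "columns": ["duration_ms", "http.route"],
     "query": {"calculations": [{"op": "P95", "column": "duration_ms"}],
               "group_by": ["http.route"]}},
    {"nlq": "error rate by service",
     "columns": ["error", "service.name"],
     "query": {"calculations": [{"op": "COUNT"}],
               "filters": [{"column": "error", "op": "exists"}],
               "group_by": ["service.name"]}},
]

def suggest_query(nlq: str, user_columns: list[str]) -> dict:
    messages = [{"role": "system",
                 "content": "Output only a JSON object matching the query "
                            "schema shown in the examples. Use only the "
                            "user's columns."}]
    for ex in EXAMPLES:  # few-shot: show input/output pairs to imitate
        messages.append({"role": "user",
                         "content": f"{ex['nlq']}\ncolumns: {ex['columns']}"})
        messages.append({"role": "assistant",
                         "content": json.dumps(ex["query"])})
    messages.append({"role": "user",
                     "content": f"{nlq}\ncolumns: {user_columns}"})
    resp = client.chat.completions.create(model="gpt-3.5-turbo",
                                          messages=messages)
    return json.loads(resp.choices[0].message.content)

# A column named elapsed_time_ms plausibly maps onto the duration examples.
print(suggest_query("slow requests", ["elapsed_time_ms", "http.route"]))
```

You know what I think is really impressive about everything that you've said? First, the fact that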

Yeah.

Like, I like love going to the different like booths and talking to their engineers and talking to their PMs and the disconnect between their engineers, their PMs and their customers is wild. And then they never have the right information about the competitors that are 10 feet away from them. And they could have easily went to the like booth and asked a question and it blows my entire mind every time. Like that is the best part about being a solutions architect or PM or whatever you want to call it. Just being able to compare the product and see like,

Where, like, half the time there's low-hanging fruit that would be so easy to fix, or to know more about to make your products better. But also, you're using AI for something that's going to actually make people's lives better, and not something that we didn't ask for. Not in a way that you're handicapping your developers. Like, a lot of times, just adding AI to IDEs, you're teaching new developers, like,

bad habits that they don't even know they're getting, and you're not enabling them to get the skills that they need to troubleshoot and to grow as engineers, right? But you're using it in a way that actually teaches you how to debug and what the process is, and helps them learn processes, which makes engineers better. So, like, I'm so excited about this. I want to go play with it, because, like,

I think we're going to handicap so many engineers in this next generation with different AI. It's like that one meme: why hire a software engineer if they just Google? Why give them six figures to just Google? And it's because you know what to Google, you know what I mean? And we're almost taking that from people, because ChatGPT, or just any GPT, is going to become the source of truth, and they're not going to get the, like,

the skills to figure out, like, this looks wrong or this looks right, because so much of it isn't just an error in an IDE. That's not all of being an engineer, putting all these pieces together. So,

Like, the way that you're describing it is so exciting, because you're teaching them a process, even if they're going down the wrong path. You're, for one, making it faster, which we all know every big company or small company needs: getting you faster from being new and onboarding to writing that first commit and PR, right? But also, they need you to be effective and to learn. And you're bridging those two things, which is just amazing. Like, that's the first time I've been so excited about AI, because usually I'm like, oh, this sounds bad.

Like, you know, that's actually, like, amazing. Yeah, yeah. And that was definitely the goal. And we have much more ambitious goals around this thing as well, in that we have gotten feedback from people where they do want to drive their querying process end to end using natural language. And you can kind of use Query Assistant to do that today.

It's not as good at that. And frankly, that was a scoping exercise, because the primary problem that we were seeking to solve right then and there, which frankly I think was the right problem, is a lot of people would come in, try to use Honeycomb, but they'd see our interface and they'd be like, I just don't understand how I can start to use this. Like, I can express how I would like to start out, but I don't know how to shape that into a Honeycomb query. And we found some

pretty good success metrics where people would come in, they would use the natural language query feature a few times, sometimes quite a lot actually. And then they would start using our manual querying interface to manually tweak things and explore a little bit more without even using the natural language portion of it. Sometimes they would, and it does support that, but sometimes they wouldn't. And from our perspective, we're like, great, we don't really care

if you're using the AI feature or the non-AI feature; we want you to explore your data. And if this helps you get to the point where you could start exploring more and get more curious, great, that's the problem that we're solving. This is the biggest value prop that I hear in everything you're saying: you're making them more efficient. It's like alarm fatigue, right?

Like, you're going to start to ignore your alarms if they're always going off. And like logs, when you get so much log data that you just need to delete it, because you don't know where to start and it's filling up all your disk space, you know what I mean? But what you're doing right now is you're helping them to focus and to break down a problem. And that's kind of all of what engineering is, right? You take these big problems, these big systems, especially at scale, right? And you break down the problem so you can learn, like, everything.

I feel like we always talk about observability and metrics, but it's so hard to get the good metrics and figure out what you should be paying attention to and figuring out how your system works. Whether you're new or you're just maybe one of your coworkers built something and now you have to go fix it, you know, or somebody leaves. Like, especially with all the turnover in tech right now, we are going to have these huge systems at scale,

that you might not have built. Like, in college, you sit there and you build these projects from scratch, but that's not being an engineer in real life, right? You're going to go work in a legacy code base that's huge, that you do not have the time to figure out. You're going to have to just get thrown in there. And what you're doing is you're making them efficient and you're giving them this whole process. And I just think that that is, like, a huge value prop of why your product is valuable, you know?

And in this time where people are really having to make those hard decisions to figure out where to budget and where to put their money, something that's going to help your engineers grow and help them to be more efficient with their time, but also not handicapping them is just like...

like fire. Yeah. Yeah. And, uh, we think that's a good use for this kind of technology right now. I mean, you know, there's a bunch of AI hype, nonsense, whatever. I barely even pay attention to it anymore. But it's wild, because you're actually in AI. Like, a lot of people in AI will just be hyping it, and you're like, none of this makes sense. It goes back to what I said at the beginning of the podcast, like you,

You have enough honesty that you keep it real, that it makes me trust your opinion. You know what I mean? But also your product actually genuinely sounds cool.

Yeah, I mean, frankly, that's what we're aiming for. People don't want to be replaced. They don't want things to be wholly replaced. People are willing to accept change. They're willing to accept, hey, this thing that I might have liked doing in the past is now getting abstracted over; there's an engine that can do some of that. I think we've seen variations of that throughout history. I imagine there were probably some developers out there who really loved handwriting assembly.

And I think the majority of them have probably also come to accept that compilers are pretty great and we can use compilers so we don't have to handwrite assembly all the time. But like... It's like people who love to write Bash by hand for no reason just to like be mean to themselves. Yeah.

Yeah, I will say, I am so happy that I barely have to write any Bash in my life. I can just go to ChatGPT and it does a good enough job. I'm like, thank you. Same thing with regexes. I find zero joy in that. I am so happy to offload it. I have, like, two friends who get great joy out of writing regex. And I'm like, I love you smart people, but I want nothing to do with it. Power to them.

This shows that this product will outlive the AI hype. This is like something worth investing in. And, you know, like there's always that turnaround cycle of, oh, this new thing is cool. And then you onboard to it and then you have to migrate off of it and you're just stuck in this tech debt cycle. And I feel like this is like the value prop of like why...

your product is worth giving a try, worth investing in, and it's going to outlive this, because it actually delivers value to engineers and it's not just a hype thing. It very much fits what we've been trying to teach engineers just, like, in general. Like, you need to be effective, break down problems, like

and just kind of, like, go and seek out that information. And this is just taking what we already were doing to a more effective level, you know? So it's not, like a lot of times, reinventing the wheel just to do it and just to make it very expensive. You're not; you're actually improving on the wheel. You know what I mean? It's like,

So I think that's just really cool. Yeah. Yeah. And, you know, as you might imagine, we have many more things that we're looking to build, and we're kind of investing in our own, I guess you'd call it, like, AI team. But philosophically, we're very much aligned with the same thing that we went in on last year when we built the first version of this: we're here to help.

We recognize that people come into the product and the problems that they come in with are multifaceted. And there's people of varying levels, where some people want zero assistance, and that's great. The product is amazing for them. Some people want some assistance. Some people want a lot of assistance. Some people need a lot of assistance earlier in their journeys, and then they back off of that. Some people want assistance in the weeds of when they're actually debugging stuff. And

we are in a position to be able to help with that to varying degrees. Like, if you are not in the middle of an incident, but you have Honeycomb open and you're curious about something, you're like, huh, there's maybe a hundred different questions that I could ask of this data, but I'm not even sure what those hundred questions necessarily are. Or, are there even categories of analysis that I could do?

Well, great, there's things that we can probably even suggest: based off of the shape of this data here, it looks like there might be these kinds of hypotheses that you could go and test. Do you want to explore some more? That's my favorite part of AI, just being able to get unstuck. You don't want it to write everything for you, but when you're having that brain moment where you're like a deer in the headlights, it gets you to move forward. Yeah.
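As a purely hypothetical sketch of what schema-driven suggestions like that could look like (the heuristics and question templates below are invented for illustration, not a description of Honeycomb's actual feature):

```python
# Hypothetical sketch: derive candidate questions to explore from the
# shape of a dataset's schema alone. All heuristics here are invented.
def suggest_hypotheses(columns: dict[str, str]) -> list[str]:
    """columns maps column name -> rough type: 'number', 'string', or 'bool'."""
    suggestions = []
    for name, ctype in columns.items():
        lowered = name.lower()
        # Duration-like numeric columns suggest latency analysis.
        if ctype == "number" and any(k in lowered for k in ("duration", "latency", "_ms")):
            suggestions.append(f"Is the P99 of {name} dominated by one route or one customer?")
        # Route- or service-like string columns suggest breakdowns.
        if ctype == "string" and any(k in lowered for k in ("route", "endpoint", "service")):
            suggestions.append(f"Does behavior change when broken down by {name}?")
        # Boolean error flags suggest comparing error vs. non-error events.
        if ctype == "bool" and "error" in lowered:
            suggestions.append(f"Which attributes best separate {name}=true from {name}=false?")
    return suggestions

print(suggest_hypotheses({"duration_ms": "number", "http.route": "string", "error": "bool"}))
```

In practice you would presumably reach for an LLM rather than hand-written rules, but the input is the same either way: the shape of the data, not its contents.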

Yeah, yeah. Unstick you a little bit and fundamentally encourage curiosity. Because if we get you to dig deeper, get more curious, ask more questions, get more assistance that way, that makes you, frankly, a better user of Honeycomb. And it's really good for our business.

There's a direct line that you can draw between how many users in an organization that has purchased Honeycomb are actively engaging with the product and how likely that organization is to want to purchase more Honeycomb, re-up their contract, maybe buy even a whole lot more. And it's like, alright, cool. We managed to make it such that

our business aligns on encouraging people to be more curious. And I'm like, great, this is perfect. But that also makes you better at building stuff, too. Like, it's not just a reason for you to keep using Honeycomb; it's a fundamental thing of being a good builder of things, you know? And just, like, another thing that...

Because you're making people more efficient in ways of being able to give them suggestions and give them ways to debug. You know, when you're an engineer, you're building something or you're trying to diagnose something, you're so pressed for time that you need to get something back up. You're kind of panicked. So when you take a lot of that panic away, or you give people more time back, they have more time to be curious and more time to do other things, you know? Yeah, exactly. Exactly. How long have you worked at Honeycomb?

Three and a half years, almost on the dot, actually. You did all that in three and a half years? Do you sleep? I do, yeah. How much coffee do you drink? I don't, not a whole lot, but I do roast my own coffee. So you have very strong coffee and

Yeah, it's strong. And that's something, like, in the coffee roasting community, because of course there's a community for this, people really, really nerd out on. It's amazing. There's all of these curves that you can draw about different dimensions of the coffee, and you can optimize for a particular aspect of it, depending on how you roast it, when you roast it, how much you let it degas its CO2, or, as I call them, bean farts. Yeah.

I love that you're nerding out about coffee the same way that you nerd out about technology. Hell yeah. It's great. Well, yeah. So, Phillip, will Honeycomb help me with my computer crash? I'm afraid not. No. Different. This is not my job. I'm very sad that I missed a good portion of your conversation. I got to hog Phillip. It was fire. I loved it. I'm happy for you. We had a good time.

Thankfully, I have a backup iPad right now. So, but anyway. What does the iPad run on? What kind of hardware is that iPad? It is hopes and dreams. It's Mac. You're welcome. And your Linux is down. Okay. Keep going. Which is also why I could not come into this room for like 15 minutes because the iPad wouldn't let me in. So.

Anyway, but yes, actually my computer did fully crash and I don't know if that's hardware or not because it just locked up and will not boot again. So we will, we will be troubleshooting some things later today.

But Phillip, thank you for coming on the show. Thank you for explaining all of this to us. We found you on Bluesky; you had a thread about your AI infrastructure. So if people want to follow you, I'm assuming Bluesky is the place to find you and interact. It is a great place. I'm posting there a whole lot. Definitely not on Elon's site anymore. I try to post as much interesting technical stuff as I can out there. Occasionally there'll be a fun post. But I was going to say, tell me there's posts and memes and, like,

just shade there, because, like,

I've seen the faces that you made in the last hour, and I know good things go on in there. Good things go on in that brain. There's going to be a good one. I actually have a blog post I'm going to write soon that's going to be called Maybe You Don't Need a Vector Database. Did you see that? Did you see his face? That was, like, the cutest nerdy thing. It's going to be so good. I'm so excited. I'm going to read that one. That sounds good. I don't think I could be excited about vector databases, but here we are. Good point.

You kind of don't need a vector database for a lot of stuff. You both do the eyebrow right before you say something shady, and it's the best. There's one eyebrow that just goes up, and I already know that this is going to be good. Phillip, are you going to come back when we have our new podcast? Because, like, you've got to come back. Yeah, absolutely. Great.

Very happy to do it. And I can finally have a full conversation with you. I'm going to hog all of that one, too. It's okay. You're the one DDoSing my computer. That's why it went down. I know. I had, I had questions. Okay. Like, Justin, get out of here. So, all right. Well, thank you so much, Phillip, again, for coming on. Thank you everyone for listening. We will have one more episode after this. So if you haven't already, go find us at fafo.fm,

and we will have feeds and website stuff there for the new podcast. And also, check out Honeycomb. I've been a fan of what y'all have been doing for quite a while. Their OpenTelemetry stuff is baller. Like,

it's actually AI that is helpful. Go check that out, dude. I'm going to check it out. Also, you can find us all on Bluesky. So if you can't find the podcast, find us on Bluesky and we'll tell you where to go. The podcast is on Bluesky too, because the domains work there, which is awesome. So it's at F-A-F-O dot F-M. So we're good. But I feel like it's always good that people know where to find us, just in case they forget the podcast name, because we all know that we're loud on there. Yeah, that's right. Thanks.

Thanks, everyone. We'll talk to you later. Bye, guys. Bye. Bye.

Thanks for listening to Ship It with Justin Garrison and Autumn Nash. If you haven't checked out our Changelog newsletter, do yourself a favor and head to changelog.com/news. There you'll find 29 reasons, yes, 29 reasons why you should subscribe. I'll tell you reason number 17: you might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays.

28 more reasons are waiting for you at changelog.com/news.

Thanks again to our partners at Fly.io. Over 3 million apps have launched on Fly, and you can too, in five minutes or less. Learn more at Fly.io. And thanks, of course, to our beat freak in residence. We couldn't bump the best beats in the biz without Breakmaster Cylinder. That's all for now, but come back next week when we continue discussing everything that happens after git push.

Bye.