AI assistants primarily helped people rather than displacing jobs. While there may have been some impact due to lack of skills with AI, no one was fired directly because of AI in 2024.
It's unclear if there was an explosion of AI certifications, but the demand for AI-related books suggests a growing interest, which likely extended to certifications as well.
AI is the most lucrative area because of the massive investment and rapid changes in the field. The constant evolution of models and the high demand for AI-related content make it a profitable focus.
Platform engineering is the dominant focus within the Kubernetes space in 2024. While Kubernetes hasn't moved to the background, the way it's interacted with has shifted towards platform engineering.
Some large companies are moving back to on-premise solutions because they understand the cloud better and believe they can achieve a higher return on investment by managing their own infrastructure.
Wasm struggled in 2024, with no clear use cases emerging. It remains a niche technology, and its relevance is questionable as AI dominates the tech landscape.
One unexpected event was IBM's acquisition of HashiCorp, which was surprising given HashiCorp's recent public listing. This move was not anticipated by many in the industry.
Tech investment in 2024 was heavily focused on AI, with other areas needing to prove their profitability. IPOs and Series A funding were scarce, except for a few notable exceptions like Klaviyo and Laravel.
Within the Kubernetes area, platform engineering is without doubt the thing. It's where we invest effort, time, money, everything, right? Not everything, but more than... I don't think that there is a bigger area right now than platform engineering. This is DevOps Paradox, episode number 294: looking back on our 2024 predictions.
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darren and Victor, pretend we know what we're talking about. Most of the time, we mask our ignorance by putting the word DevOps everywhere we can and mix it with random buzzwords like Kubernetes, serverless, CICD, team productivity, islands of happiness, and other fancy expressions that make us sound like we know what we're doing.
Occasionally, we invite guests who do know something, but we do not do that often since they might make us look incompetent. The truth is out there, and there is no way we are going to find it. Yes, it's Darren reading this text and feeling embarrassed that Victor made me do it. Here are your hosts, Darren Pope and Victor Farcic. Victor, here we are, one week away from Christmas of 2024.
And because of calendars, we are doing our prediction review episode now instead of the week after, because the Wednesday after Christmas is actually in 2025. So we're doing it early. So this is an early Christmas present. Or maybe it's coal, because I think it's going to be coal based on what I prepared. Are you referring to last year's predictions? We are referring to the 2024 predictions, yes.
Okay. Is there at least one that is getting close? I don't think so. Well, part of one may be getting close. Let's just get started with it. One of the things that we said was AI assistants will start displacing jobs. Possibly, probably, partially true. There's a lot of Ps there. I mean, I don't recall. Was it for this year, really? I mean, eventually they will start displacing jobs. It didn't happen this year. I mean...
Helping people, yes. I don't think that anybody was fired so far because of AI. Maybe because of lack of skills with AI, maybe. Lack of skills with AI. One of our other predictions was that there was going to be an explosion of AI certifications.
That's impossible to verify. I'm not following certifications. I have no idea. But I would be very surprised if there was no surge in them. I know for books, for example, which I also don't follow but hear about
over dinner or whatever, there is a surge. I just cannot confirm for certifications, but I would be very, very surprised if there wasn't. Especially since all those publishers, they're always hungry for, hey, can you come up with something new, right? We have 57 books already about Kubernetes, right? Or whatever was before that. So I would be very surprised if we were wrong on that one. Well, okay, maybe partially wrong. But let's stay there for just a second.
Publishers would reach out and it's like, hey, we need a book on X. And that X probably had a shelf life of 12 to 24 months, would you say? Depends what X is. The closer it is to cutting edge, the shorter the shelf life is. Right. So that's what I was going to say about AI. AI, I think, is probably the shortest cycle we've ever seen because it keeps changing so often. We get new models,
it feels like, every day. True, but I would guess that it also sells much better, right? What you can sell on the topic of AI in books within a year is probably worth like 10 years of selling something about C, and I'm being very gracious towards C right now. You do need C in order to create models. Oh no, you only need Python. Never mind. You need Python. Here's one point I think we did get right. If you want money, go AI.
Oh, yeah. Oh, yeah. No doubt. I think that's the one that we got right out of everything else. I still think that we got the previous ones right. Okay. Well, we're all going to be sick of AI? I am. I cannot speak in the name of everybody else. And then locally running models. Could we have done locally running models, I'm thinking primarily Ollama right now, prior to having...
ARM? I'm thinking specifically Apple Silicon at this point because that's the ecosystem I'm in. I mean, Intel and Broadcom are also rushing towards being able to run some AI something-something
on your laptop. So it's probably getting there. Now, whether they will bypass Apple, that's an open question. And not just what I consider local; I'm also going to consider, like, DigitalOcean local. Like we can get these smaller providers and just spin something up. Cloudflare, all those guys, even though they're big companies. Oh, yeah.
It's not like we have to go and build our own model and jump into the Azure or AWS or Google ecosystems. I mean, self-hosted, right? Self-hosted, yeah. Yeah, that's a thing, definitely. Especially for bigger companies. I hear all the time, kind of, yeah, we cannot do it because, you know, Azure, AWS, Google Cloud Service, nobody knows where our data goes.
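[Editor's note: a minimal sketch of the kind of self-hosted, locally running model setup being discussed here. It assumes Ollama is installed and serving on its default port, with a model such as llama3 already pulled; the model name and the prompt are placeholders, not anything from the episode. The point for the "nobody knows where our data goes" crowd is that the prompt and the response never leave the machine.]

```python
# Minimal sketch: call a locally running Ollama server from Python.
# Assumption: Ollama is running (default port 11434) and `ollama pull llama3`
# has already been done. Swap MODEL for whatever model you have locally.
import json
import urllib.request

MODEL = "llama3"  # placeholder model name

payload = json.dumps({
    "model": MODEL,
    "prompt": "Summarize what platform engineering means in one sentence.",
    "stream": False,  # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # generated text; the request never left this machine
```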
The solution is obvious for those companies, not for everybody. That's self-hosted now. Whether that self-hosted is on a machine in your basement or in AWS, that's a different story. So you're thinking we actually did okay on some of the AI things. I'm not feeling that way. Okay. All right. Well, I think there is a craziness about it, right? Nobody still knows what it's for, just to be clear. Why do we want this and what is it? We don't know yet, but we all know that we want it.
Which is very strange, right? You don't know why you want something, but you want it. It's just like being a two-year-old, I think. It's like, I want it. Just give it to me. I don't know what it is. I don't know what it does, but I want it. Exactly. So we're going through our toddler phase right now. So does that mean in about three years? Well, at least we'll be out of diapers by then, I hope. Look, the pace with which...
No, actually, the pace of movement doesn't matter. The level of investment in AI is so huge, probably bigger than anything we've seen before, that we cannot wait for more than three years. Some of that money, one out of 10, one out of 50, one out of 100 of those investments, needs to pay off. And when I say pay off, I mean start earning money. And start earning big money. That's what I'm hearing. Yeah, because you cannot...
You cannot just finance those things the way we normally do, where we say, hey, congratulations, you got a Series A investment. You got 2 mil, 3 mil, maybe 5 mil. That's nothing. It's completely irrelevant when we're talking about AI. We're talking about larger amounts of money, and that's excluding the companies that actually create their own models. So I'm not talking about
investments where you're creating your own models, right? That's a hell of a lot of money. Well, when you have one of the biggest companies in the world, Microsoft, funding OpenAI. It's just brilliant. The Microsoft deal is so brilliant. Say, I'm going to give you this amount of money. You never saw this amount of money in your life. I'm going to give it to you. But I'm not going to give you a single dime. I'm going to give you my own servers in exchange for the money I just gave you. That's such a brilliant play.
It's one way to exercise your infrastructure really fast. Yeah. We need to build infrastructure because we know what's coming in the future. So why don't we just build it right away as an investment in a company we just made? Not a bad idea. Not bad at all. We're done with the AI part, I hope. So again, you think we did okay. I'm not convinced we did, but okay. It's a prediction episode. Who cares? And it's the week before Christmas again.
You shouldn't be listening to it. You should be listening to us, but not right now. You should be hanging out. You need to be putting up your tree. This next one. This year, meaning 2024, Kubernetes will move to the background and the way you interact with it will be through platform engineering. True or false? You think we got there? No, I don't think we got there, but it's no longer a conversation about whether it should or shouldn't.
Within the Kubernetes area, platform engineering is without doubt the thing. It's where we invest effort, time, money, everything, right? Not everything, but more than... I don't think that there is a bigger area right now than platform engineering. We haven't gotten there. Arguably, we might never get there, right? Because it's an everything-or-nothing type of situation. But it's a thing. It's definitely a thing.
Now, I don't know whether that fulfills the prediction. I don't remember the exact words, kind of like, oh, are we finished by the end of this year? If that was the prediction, then we failed. But if the prediction was, is this going to be the thing within the Kubernetes area, then we got it. What do you think is the right way to do platform engineering today? I mean, we have vendors that have their solutions.
We have open source, primarily just Backstage. There's maybe others, but Backstage is the one that jumps out at me. Oh, stop, stop, stop, stop, stop. Now you're mixing platform and portals. Oh, thank you. Okay. When I talk about platform, I mean a lot of things.
I mean, services running somewhere, exposing those services through APIs, frontends, CLIs, Visual Studio Code extensions, workflows or pipelines. Many things need to be assembled for something to be called a platform. And Backstage is only the graphical user interface part of it.
When I say only, it sounds like, oh, this is irrelevant. It's important, but it's not a platform. You're correct. It's not a platform. And yes, I was mixing things. We've got to come up with other letters to use when we're describing things because we use P a lot. We use E a lot. We need other words. It just gets too confusing. Can it be two words? Developer platform? Developer platform engineering on a Friday. Yeah, I think that'll work. Developer platform and developer portal. Does that confuse you less?
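[Editor's note: a rough sketch of the "platform is more than a portal" point. The portal, Backstage or anything else, is just one client of an API like the one below; a CLI, a pipeline, or a Visual Studio Code extension could call the same endpoint. This is purely illustrative: the FastAPI choice, the /environments endpoint, and the request fields are assumptions for the example, not any particular vendor's product.]

```python
# Illustrative-only sketch of a platform exposing a self-service API.
# Run with: uvicorn platform_api:app   (assuming this file is platform_api.py)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="internal-developer-platform")


class EnvironmentRequest(BaseModel):
    team: str
    app_name: str
    size: str = "small"  # hypothetical t-shirt sizing baked into the golden path


@app.post("/environments")
def create_environment(req: EnvironmentRequest) -> dict:
    # A real platform would trigger the workflows/pipelines that provision
    # namespaces, databases, DNS, and so on. Here we only acknowledge the request.
    return {
        "status": "provisioning",
        "environment": f"{req.team}-{req.app_name}-{req.size}",
    }
```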
No, not at all, but that's okay. The next thing is moving to on-prem. I was thinking there was going to be a lot more of it, and by on-prem I mean completely self-managed, not using a hyperscaler or even a mid-scaler. Yeah, like it can be colocated: hey, you put the server there, but it's my server, right? Correct. That's what I mean by on-prem. Did I also predict that?
I think you fought back against that a little bit. It sounds like something I would fight against, yeah. I don't know. I've seen some people saying yes, and some people still just headed towards the hyperscalers because they think it's a better deal. I don't think that there is anything stopping hyperscalers. There will always be somebody who will say... look, there are different use cases. Let's start with that, right? There are companies who get disappointed with hyperscalers,
and in a big percentage of those cases it's kind of, they did not really get how it works. So, I'm disappointed because this is not what I expected. And when they go on-prem, I think that's also temporary, right? You'll go back, you'll be back in the cloud. You just need a bit more time to understand what's going on. What I think is going to continue happening is that some really serious and big companies will be moving from cloud to on-prem soon,
from the perspective of, okay, now I get it. Now I understand what is expected from me if I'm not in the cloud, right? And I think I can build it with a lot of people, a lot of investment, but the return on investment will be bigger than if I stay here, right? But we're talking about companies like Netflix, right?
or Spotify. I'm not saying that they have nothing on-prem, right? But with that type of experience and that size, yes, it makes sense. For equally big but traditional companies, it doesn't make sense. If you think about it, we've been in the move-to-cloud phase for, what, 15 to 17 years? Yeah. Going back to 2007, 2009. It's not abnormal to me that people are seeing, oh, I figured it out now.
I could cut my OPEX spend way, way down, even though it's going to be capitalized, CAPEXed. I've got space, or I can get access to space that is cheap because there's new nuclear plants coming on, and I get basically free power. Different conversation. But, you know, if you take a subset, let's say AWS has around 1,000 services. Let's say that you need a subset, 100.
It sounds like a lot, but it's not actually a lot, because a single VPC already pulls in a bunch of them. Okay, say 50. Building 50 services like that on top of your own hardware costs a lot of money. A lot. So yeah, you might be saving money by going on-prem, but maybe after some years of losses first. You will be spending more than in the cloud if you go on-prem.
That's, I feel, the message, right? Assuming that you want to end up with a similar quality of service. So there is a limited number of companies who are capable of long-term planning and saying, you know what, I will be paying more than I would to AWS because years from now I will be paying less. Because they're planning for it. They're willing to pay up front. They know they're going to have to spend...
way more upfront, but then that's going to tail off. Yeah. Not only that, but let's say you're a typical traditional enterprise. Think about the 10, 20, 50 people that you will have to employ to do that. Each of them is going to cost you like five times more than what you're paying people right now, just to be clear.
Right? You'll be hiring people from Amazon. You will be going to Amazon or AWS and saying, not to all of them, but at least to a couple of key people, you're going to say, hey, why don't you come to our bank, where you really don't want to work at all because you're at the cooler place. But why don't you come to us, and what can we offer you? Well, the only thing we can offer you is an even bigger salary than the salary you're getting right now.
So those are expenses you also need to account for, right? I guess that's true, because you're not going to hire somebody fresh out of university to lead that effort. And if you've got somebody in the company that's been there for 30 years, they're not going to know how to lead that part. Yeah. They can be re-skilled. So you can have 40 of those, but you need to have five of the other ones that are going to cost more than those 40.
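[Editor's note: to make the "pay more upfront, then it tails off" argument concrete, here is a back-of-the-envelope sketch. Every number in it is invented for illustration; real cloud bills, hardware costs, and salaries vary wildly, so treat it as the shape of the calculation, not the values.]

```python
# Illustrative break-even sketch for staying in the cloud vs. building on-prem.
# All figures are made-up assumptions, not real prices or salaries.
CLOUD_ANNUAL_COST = 10_000_000     # assumed yearly hyperscaler bill
ONPREM_CAPEX = 15_000_000          # assumed upfront hardware / data-center spend
ONPREM_ANNUAL_OPEX = 2_000_000     # assumed power, space, maintenance per year
PLATFORM_TEAM = 10                 # assumed headcount to rebuild the services you need
SALARY_MULTIPLIER = 5              # "each of them costs ~5x a normal hire"
BASELINE_SALARY = 100_000          # assumed fully loaded baseline salary

people_cost_per_year = PLATFORM_TEAM * SALARY_MULTIPLIER * BASELINE_SALARY


def cumulative_cost(years: int) -> tuple[int, int]:
    """Total spend after `years` for cloud vs. on-prem under these assumptions."""
    cloud = CLOUD_ANNUAL_COST * years
    onprem = ONPREM_CAPEX + (ONPREM_ANNUAL_OPEX + people_cost_per_year) * years
    return cloud, onprem


for year in range(1, 8):
    cloud, onprem = cumulative_cost(year)
    marker = "  <- on-prem now cheaper" if onprem < cloud else ""
    print(f"year {year}: cloud ${cloud:,}  on-prem ${onprem:,}{marker}")
```

[With these made-up inputs, on-prem only pulls ahead around year six; change any input and the break-even moves, which is exactly why this is a long-term planning decision.]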
And they take more vacation too, because they've learned how to. Exactly. And also they have a very short fuse, right? Kind of like, don't come to me with silly things. Do the work. And no more than two pizzas. Now, we've been talking about the enterprises; I'm going to go to the other side of this. If you are a startup, you don't necessarily need the hyperscalers. You may not even need the
small scalers, the DigitalOceans, the Linodes, or now Akamai. You may not even need those. It may be best, again, you're a startup, you might be able to come up with $5,000 for hardware, and you can do $1,000 a month for a colo. I don't even know what colo prices are anymore. So if you do that, you're talking under $20,000 for a year.
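[Editor's note: for what it's worth, the arithmetic behind the "under $20,000 for a year" figure; both inputs are the rough guesses from the conversation, not real quotes.]

```python
# Rough first-year colo estimate using the numbers from the conversation.
hardware_upfront = 5_000   # one-time hardware purchase (rough guess)
colo_per_month = 1_000     # assumed monthly colocation fee
first_year_total = hardware_upfront + 12 * colo_per_month
print(f"first-year total: ${first_year_total:,}")  # $17,000, under the $20k mark
```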
That's probably what you would spend in the first week with AWS getting set up because you forgot to turn things off. No, absolutely not. I mean, look, would you agree that what one person can do with Fly.io, you need three people, and I'm being generous, to do the same thing in AWS, and you need 10 people to do it on your own hardware?
That I completely agree with. Now, I'd left that out because that, to me, is platform as a service. If you're needing to do something in between, like if you can't just do it on Fly.io or Heroku or the other variants of that, and you're having to actually start building infrastructure, whether it's on AWS, DigitalOcean, or whoever, that's where I'm saying it still may come out better. At least you'll know better what you've got to do.
And of course, I can be wrong. I feel that, generally speaking, self-managed makes more sense the bigger you are. The more people you have, the more you can control the problem, right? I would say: small, Fly.io.
Bigger, DigitalOcean; even bigger, AWS; and as the size of the company gets even bigger, yeah, maybe on-prem, right? Your own data center, your own building, and all that stuff, right? And there are shades of gray between all of those. But hey, if you're a startup, five people, please go to Fly.io. Please, don't listen to Darren. Five people. Don't spend more than 15 minutes on infrastructure. Just develop your application.
So you mean as a startup, I need to be focused on actually trying to make money and solving a business problem instead of building cool tech? You know, maybe we can try that option and see how it goes. And see if we're still here next year to have the same conversation? Yeah. Yeah.
It's the same, just to be clear; we're not only talking about infrastructure, you know, servers, this and that. I think that applies to other things as well. Please don't start connecting your data center with your office with your own cables, digging up the ground and all those things. Just use your ISP. It's fine. Don't focus on those things. Last one. Last one. I'll let that sit. That's fine.
I think this one we completely just whiffed. Wasm will become a normal workload type. Not even close. Why were we even thinking that? I don't know. I'm questioning my sanity. And I'm going, I honestly don't remember what I said a year ago. And you're forcing me now to go and listen to it. Because that sounds so insane
that I'm questioning my sanity. Or maybe I'm sane now, but I was insane a year ago. That makes no sense. I mean, Wasm might be increasing, that could be the thing, but becoming a kind of de facto something-something? And I think that Wasm is in deep trouble right now. I hoped that by now Wasm would have gotten out of the "we have cool tech, let's figure out what it's for" situation. And I don't feel that Wasm got out of it.
And I'm starting to be skeptical about whether it ever will. Because, okay, you have only one attempt to guess the most common type of workload that Wasm projects are promoting right now. Only one attempt. Serverless? No, man. AI. Okay.
Oh, AI. Serverless was the fancy thing two years ago, right? You're right. It just moved from one hype to another, and maybe it will stick here. That doesn't even make any kind of sense to me. It doesn't make sense to me that Wasm is good for inference. It doesn't make any kind of sense that it's good for training. I don't... Okay. Yeah. Obviously, I totally missed it. And again, I don't spend a lot of time in Wasm, so it had to have been you who brought this up. But is it just dying on the vine? I think so, yeah.
Can you remind me of Wasm the next time we speak about predictions, future predictions? Yeah. I have one about Wasm. We'll do that in a couple of weeks, the first week in January. Okay. So we'll talk about Wasm again. In fact, we'll probably be revisiting a few of these, because AI obviously will be showing up again. Now you've added Wasm. Is there anything else that we missed in 2024?
Okay, and those were the predictions we had. We had four basic big-bucket predictions: AI-related, platform engineering, on-prem, and Wasm. What happened this year that we didn't see coming? I didn't see IBM buying Hashi. That did not compute for me at all. I mean, retrospectively, I understand buying Hashi and stuff like that. But if you had asked me a year ago about that possibility, I would say, come on, no, they just went public yesterday. I missed that part.
Anything else? Everything else is pretty quiet this year, right? It is, because everything is focused. There are two modes you can be running in right now as a project, a company, whatnot. You're doing something AI, or, if you're not, you are one of the very few who can really, truly demonstrate with numbers
that this is really going to be profitable. There are no more empty promises. "I have an idea, give me 50 mil more." That's not happening anymore. So the only hype is AI. Everything else needs to prove itself. Which is a bit... there haven't been, I don't think, other than one or maybe two IPOs this year. One was Klaviyo, which is a marketing tool.
I think there was one more, but I can't remember what. I mean, the tech IPOs were basically non-existent. Yeah. Most Series A's, if you did get a Series A, were typically small. Now, towards the end of the year, the Laravel framework got 57 million in Series A. How? I don't know, but okay. It did. It's mind-boggling. Yeah. I'm guessing most other Series A's that I saw throughout the year were in that three to less than 10 million range.
What does that say for tech? What it says for tech is that tech is very much focused on hypes. And hypes have a limited duration. We had many hypes, and right now, not right now, a short while ago, we got out of the Kubernetes slash cloud-native hype. We just finished that phase. That does not mean that there will be no business or that its importance is dropping, just to be clear.
I don't think that the importance is dropping. And I think that the usage of cloud native this and that, and Kubernetes this and that, and whatnot, is going to continue increasing, but not as a hype, not like 100x, right? That's finished. As a hype, again. So overall, for the predictions, I think we did 50-50, which is about par for the course for us. Sometimes we get it, sometimes we don't. This time, it was mainly don't, I think.
I'm going to go against Victor. I think Victor says probably more hit than miss, but I think we're more miss than hit. So what did you think? How has your year been? Have things happened that you didn't see coming? Now, in full transparency, we're recording this before the U.S. elections. So who knows what happened? I guess by the time you're listening to this, hopefully it's been figured out because it's been seven weeks. We'll see. That might be the prediction for 2025.
Can a presidential election be closed in under six months? Who knows? We'll see. Head over to the Slack workspace, go to the podcast channel, look for episode number 294 and leave your comments there. We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and a link to the Slack workspace are at devopsparadox.com slash contact.
If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at DevOpsParadox.com to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox. ♪