
DeepSeek: America’s Sputnik Moment for AI?

2025/2/6

a16z Podcast

People
Martin Casado
General Partner, focused on AI investing and on advancing the industry.
Steven Sinofsky
Topics
@Martin Casado: The release of DeepSeek R1 caused a huge stir in the AI world. It isn't just a model release; it represents China's strength in AI research and innovation. DeepSeek's emergence reminds us that innovation can come from anywhere, even from a team with a hedge fund background. Moreover, DeepSeek's decision to open source its model and its reasoning steps matters a great deal for the proliferation and application of AI. I believe we should respond to the DeepSeek challenge by investing more in AI research, rather than trying to slow the technology down by restricting open source or relying on export controls.
@Steven Sinofsky: The DeepSeek release is indeed an important event, but we shouldn't over-interpret it. Part of DeepSeek's success comes from its ability to draw on the distinct advantages of the Chinese internet and on relatively low labor costs. More importantly, DeepSeek reminds us that AI progress shouldn't focus only on massive compute and data; it should put more weight on engineering innovation and optimization. The future of AI is smaller, more specialized, and closer to the edge, and DeepSeek's open-source strategy will accelerate that shift. We should learn from the history of the internet and encourage innovation and competition rather than trying to control and monopolize the technology.


Chapters
The release of DeepSeek's R1 model has created a stir in the AI world, with many comparing it to the Sputnik moment. This episode explores the key aspects of R1, its implications, and the reactions it has generated. The discussion also includes the role of internet history in making sense of the event.
  • Release of the Chinese reasoning model R1, with an open-source MIT license.
  • Claims of 45x efficiency improvement over other methods.
  • Alleged $5.6 million development cost.
  • Release of reasoning traces and a follow-on image model.
  • Comparisons to the Sputnik moment and its implications for various stakeholders.

Transcript


R1 comes out and it looks pretty good. That's not the best layer to monetize it. In fact, there might not be any money in that layer. I have yet to see the GPT wrapper. The internet is such a great example because there's no way this doesn't play out like the internet. It's actually a very big step when it comes to the proliferation of this model. It's a good reminder that there are always pockets of people innovating. WorldCom and AT&T did not predict the internet was going to come out of universities.

Two words have caught the internet by storm: "deep" and "seek," specifically a Chinese reasoning model that seems to rival others at the frontier.

That's not all. Alongside their R1 model that dropped in late January came a fully open source MIT license, a paper outlining its methods that some claim may be 45 times more efficient than other methods, an alleged $5.6 million cost, the release of reasoning traces, a follow-on image model, and the fact that all of this was released by a hedge fund in China.

Since then, there have been so many claims and claims about those claims that many are already referring to this as a Sputnik moment. But if you think about it, the reason that Sputnik, the first satellite launched into low Earth orbit by Russia in 1957, still matters in 2025 is because America took all the actions that it did in '58, '59, '60, a moon landing speech in '62, all the way up to 1969 when we reached the moon. Those are the actions that made Sputnik, Sputnik. A wake-up call was responded to.

So now that we're here, how should we, whether you're a casual listener, a founder, a researcher at a top AI lab, or a policymaker, not just react to this message, but act?

Joining us to discuss this and tease out the signal from the noise are a16z General Partner and pioneer of software-defined networking, Martin Casado, plus Steven Sinofsky, longtime Microsoft exec, including being the president of the Windows division between 2006 and 2012.

Steven, by the way, has also been a board partner at a16z for over a decade and shares his learnings online at Hardcore Software, where he recently wrote a viral article called "DeepSeek Has Been Inevitable and Here's Why." Of course, we'll link to that in the show notes.

Both Martin and Steven have been on the front lines of prior computing cycles, from the switching wars to the fiber build-out, and have even witnessed the trajectory of companies like Cisco, AOL, AT&T, even WorldCom. So what really drove this DeepSeek frenzy? And more importantly, what should we take away? Have bigger and better frontier models been optimizing for the wrong thing? And where does value in the stack accrue? Today, we address those questions through the lens of internet history.

I hope you enjoy. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures. Thank you.

It's been a busy few weeks. I don't know about you guys, my Twitter feed, podcast, everything, DeepSeek everywhere, maybe unsurprisingly. But what's your TLDR in terms of what came out and maybe also your take on why it blew up in the way it did? Because we've seen lots of releases in the last, let's say, two years since ChatGPT. The quick overview, of course, is out of essentially nowhere,

a small hedge fund, quasi computer science research organization in China,

releases a whole model. Now those in the know, know it didn't just appear there's a year and a half or so of build up and they're really good. Nothing was an accident, but it appeared to take the whole rest of the world by surprise. And I think there were two big things about it that really caught everybody's attention. One,

was how did they go from nothing to this thing? And it seems to be within a constant factor of the capabilities of everybody else. And this number got thrown around that it only cost $5 million. Yeah, $6 million. The number is irrelevant because it turns out they wrote a paper and they said, hey, we innovated in this area

set of things on training, which even here, it was like, oh, well, that was pretty clever. And then because of the weirdness that we don't need to get into of the financial public markets and how this whole thing happened on a Friday, the whole thing

The whole weekend was like everybody whipping themselves into a frenzy so they could wake up Monday morning and trade away a trillion dollars of market cap, which seems to be a complete overreaction and craziness, but that's not what we're here to talk about. To your point, there's a lot of moving parts here and there's a lot to consider. It's actually a fairly complicated situation. So

There has been this view that the traditional one-shot LLMs were starting to maybe asymptote. Like GPT-4, there hadn't been a big advancement, but then there's just going to be this new breath of life, and OpenAI released a reasoning model, which is O1, and everybody's very excited about that. And so in this grand...

tapestry we're considering, you have all this excitement about O1 and how that's going to drive compute costs and NVIDIA. And then R1 comes out and it looks pretty good. And then all of a sudden they're saying, well, if you can do it just as cheap, is this going to actually drive the next wave and so forth? And so there's a lot of buildup to O1, which led to the R1 hype. And then I think to your point, people didn't know really what to think about it. And I agree with you, it was a total market overcorrection. By the way, it's also worth pointing

pointing out that in addition to people saying, wow, this is a great model, there's a lot of like theories and rumor around, oh, well, maybe this is the CCP doing a PSYOP. Maybe it costs a lot more. Maybe this is very intentional. It was right by Chinese New Year. There's just a ton of rumors. Maybe we'll do our best to dissect everything going on. Yeah, maybe let's just do that because to both of your points, there was a lot here, right? There was the performance element. There was these quotes around costs. There's the China element. There's the virality. It hit number one in the App Store. Yeah.

There's also shipping speed. I think Martin, you shared that they released an image model shortly after and then it was released on a Friday. So there's this huge mixture of people reacting, some people who know what they're talking about and some people who don't, quite frankly. And so we're like 10 days or so out from this release, which by the way, as both of you said, that was the R1 release. There was the V3 release, what, two months ago, which was the base model. So now that we're a little bit further out.

What's the signal from the noise? So maybe I'll give you the lens of Chinese people are smart. There's one lens, the lens that I hold, which is China has great researchers. DeepSeek has actually released a number of SOTA models, including V3, which is actually probably a more impressive feat. It's almost like a ChatGPT-4. And oh, by the way,

To create one of these chain of thought models, these reasoning models, you need to have a model like that, which they had done and we had known about. All of the contributions that they've done have been in the public literature somewhere, just nobody had really aggregated. So there's a thought that I hold, which is this is a very smart team that has been executing very, very well for a long time in AI. They are some of the top researchers.

The fact that they spent $6 million just on the chain of thought is actually not out of whack with what Anthropic has now said they've spent, and what OpenAI has said they've spent. And so this is a meaningful contribution from a good team in China. And so it means something and we should respond to it. So some of the outcry is warranted. I do think that we should respond to it, but I don't think for the reasons a lot of people are saying.

I completely agree with that. And in fact, you also saw the people outside of that team in China sort of piling on to try to make it more intergalactic than it was. I mean, my favorite old friend of mine, Kai-Fu Lee, comes out on X and says something about this is why...

I said two years ago, Chinese engineers are better than American engineers. But the truth is, to your point about reaching some asymptotic level of progress. Yeah, like the previous base models, like the GPT lineage, seem to have asymptoted around GPT-4. Right, but what's super interesting about that is that asymptote was true

if you looked at it through the lens of the function that everybody was optimizing for. Which is, to my view, this kind of crazy hyperscaler view of the world, which is we need more compute and more data, more compute, more data, and we're just on that loop. And a lot of people from the outside were like, well, you are going to run out of data. And I would just as, you know, a microcomputer person was like, well, at some point, you're going to end up breaking the problem up to the 7 billion endpoints of the world. Right.

which will have vastly more compute than you can ever squeeze into one giant nuclear power data center. And so a lot of what they did was sort of

a step function change, not necessarily improvement, just a change in the trajectory. Yes. And that to me is the part where the hyperscalers needed to take a deep breath and say, okay, why did we get to where we were? Well, because you were Google and Meta and OpenAI funded by Microsoft, which all had like billions and billions of dollars. So you obviously saw the problem through the lens of

capital and data. And of course you had English language data, which there's more of than anybody else. So you could keep going. The way I thought of it is when Microsoft was small, we used to just decide, is it a small problem, a medium problem or a large problem? And I remember at one point we started joking that we lost the ability to understand small and medium problems and solutions. And we only had like large, which was just trivial. And then huge,

Huge and like ginormous. And our default was ginormous because we thought, well, we could do it and no one else could.

And that's a strategic advantage. And I feel like that's where the AI community in the West, if you will, got just a little carried away. And it was just like every startup that has too much money. The snacks get a little too good. So I've heard two theories of why they were able to do this. One of them is this constraint one that you've said, I think, which is actually very true, which is we've just been using this blunt instrument of compute and blunt instrument of all data. And we just haven't thought about a lot of engineering under constraints yet.

The second theory I heard, I don't know if it's true, but it's tantalizing, which is the reason V3 is so good is actually because it has access to the Chinese internet as well as the public internet, which is actually an isolated thing. We don't really have access to the internal Chinese internet. And we certainly don't train from it as far as I know, which they do. So it could be the case both things are true. They could have had a data advantage.

They definitely have the engineering constraint. Even on the data, their starting point is the Chinese internet, per se. That has much more structure to it. It's a much better training set. That's a great point. And in as much as human...

Annotated data is important here. And for chain of thought, you do want experts saying, here's how I would reason about a problem. I mean, this is what this whole chain of thought is. It's basically, what are the reasoning steps? If you want to look at a place to arbitrage really smart, educated people and relatively low cost, it's hard to beat China globally, right? And so they definitely have access to a bunch of potentially highly educated annotated data, which is very relevant here. And so I happen to be of the belief that this did not come out of nowhere. It's not a psyop.

This is a great team taking advantage of what it has. But there are still things that are very significant about it that are worth talking about. For example, the license is very significant. The fact that they decided to release the reasoning steps is very significant. Those are two things that you're not seeing headlines about, right? You're seeing headlines about all the other things that we just talked about.

You said the reasoning traces, those were released, which with the comparable O1 were previously not. Right. And then the open source license. So there's two things that are pretty remarkable about DeepSeek R1 that have implications on adoption. We haven't seen a license this permissive recently for a SOTA model. It's basically MIT license, which is like one page. You can do anything, right?

It's like free as in free beer, for real. Yeah, for real, for real. I think at a16z we have one of the largest portfolios of AI companies, both at the model layer and at the app layer. And I will say any company at the app layer is using many models. Like I have yet to see the GPT wrapper. They're all using a lot of models. They do use open source models and licenses really matter. And so this is definitely going to result in a lot of proliferation.

The second thing is, so a reasoning model actually thinks through the steps of the problem, and it uses that chain of reasoning or chain of thought to come up with deeper answers. And when OpenAI released O1, they did not release that chain of thought. Now, we don't know why they didn't do it, but it just turns out that that chain of thought, if

If you have access, it allows you to train smaller models very quickly and very cheaply. And that's called distilling. And so it turns out that you can get very, very high quality smaller models by distilling these public models. And the implications are both that this is just more useful for somebody using R1, but also you get a lot more models that can run on a lot smaller devices. So you just get more proliferation that way. So it's actually a very big step when it comes to the proliferation of this model.
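To make the distillation point concrete, here is a minimal sketch of what training on released reasoning traces can look like: fine-tune a small open "student" model on (prompt, reasoning trace, answer) records exported from a larger "teacher." The student checkpoint name, the <think> tags, and the toy arithmetic record are illustrative assumptions, not DeepSeek's published recipe.

```python
# Minimal sketch: supervised distillation of a small "student" model on
# chain-of-thought traces released by a larger "teacher" model.
# The checkpoint name, the <think> tags, and the toy dataset below are
# illustrative assumptions, not DeepSeek's actual pipeline.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

STUDENT = "Qwen/Qwen2.5-0.5B"  # any small open base model would do here
tokenizer = AutoTokenizer.from_pretrained(STUDENT)
model = AutoModelForCausalLM.from_pretrained(STUDENT)

# Each record pairs a prompt with the teacher's visible reasoning and answer.
traces = [
    {"prompt": "What is 17 * 24?",
     "reasoning": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
     "answer": "408"},
    # ... in practice, thousands of exported teacher traces ...
]

def encode(example):
    # Train on the full trace so the student learns the reasoning, not just the answer.
    text = (f"Question: {example['prompt']}\n"
            f"<think>{example['reasoning']}</think>\n"
            f"Answer: {example['answer']}{tokenizer.eos_token}")
    tokens = tokenizer(text, truncation=True, max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # plain causal-LM objective
    return tokens

dataset = Dataset.from_list(traces).map(
    encode, remove_columns=["prompt", "reasoning", "answer"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-student",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # the resulting small model can run on far smaller devices
```

The specific libraries are not the point; the point is that once the traces are public, this loop is cheap to run, which is why releasing them accelerates proliferation.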

Absolutely. And I think that there's this tendency to peg yourself at, oh, it should just be open, but without really defining it, which I think is important in this case. And I think that's

Because of where they came from and that they don't have a business model, that was part of what was unique about this was it was a hedge fund, like almost a side project, but not really a side project. It has this effect that like, well, we're just going to give the whole thing away. And the rest of the companies are still trying to figure out their revenue models, which I would argue was probably premature. And it starts to look a little to me like, hey, let's charge for a web server. And it's like the business of serving HTTP, not a great business.

And I think everybody just got focused on the first breakthrough, which was the LLM, which if you look back at the internet, what exactly happened was everybody got very focused on monetizing the first part of the internet, which was HTML and HTTP. And then along came, I don't know, Microsoft and a bunch of other companies to say that's not the best layer to monetize it. In fact, there might not be any money in

in that layer. And the real money is going to be in shopping and in plane tickets and in television. And even other companies, AT&T got wound up trying to monetize the even lower layer. But that's not how you're going to get to 7 billion endpoints. And I think that the licensing model really matters because what's going to happen is...

that there's going to end up being some level of standardization. Now, I don't know where in the stack or in what level, but there is going to be some level of standardization. And the licensing model for the different layers is going to start to matter a lot. Anyone who was around during the internet remembers the battles over

The different GNU v3, v4, the Open This license. I remember it very well. Right, well, you were doing a dissertation, and it turns out, even your dissertation, which part of it and how you released it was a huge issue because it could make or break a whole approach. And I think that the U.S. industry lost sight of that importance because they got so used to this model of, like, open just means open.

we're a business and we pick and choose what we throw out there as evidence that we're an open company. And I think that view isn't aligned with how technology has just shown to evolve in an era where there's no cost for distribution. Before, when there was a cost for distribution, it turns out the free model was irrelevant because you still couldn't figure out how to get it to anybody. I do want to take the other side of this because

I actually tend to agree with you. And so what you just said is, A, it could be the case that the model's the wrong place to focus and everybody thinks there's a lot of value in there. And so they're playing all these cute games with openness as opposed to distribution. And that could very well be true. But there's another view, which is actually the models really are pretty valuable.

And in particular, the model itself isn't an app. But it could be the case that if you're building an app, you need to vertically integrate into the model. It could be the case. And therefore, like if I'm building the next version of ChatGPT or we just had today Deep Research launch, it could be that the apps actually require you to own the model. And in that case, DeepSeek is less relevant because they're not building apps. And then this means that the impacts of the open sourcing around these topics

are not as great, right? And so I do think that there's this fork that we don't know the answer. Fork number one is maybe the models do get commoditized. You need to focus at the app layer and then the license doesn't matter. Or maybe

The models really matter up the stack, in which case the whole DeepSeek phenomenon really isn't as impactful an event as people are making it. So I'm going to build on that just because I want to say you're right both times. No, no. And the variable is time. And the Internet is such a great example because there's no way this doesn't play out like the Internet. Like it just has to. And what we saw was for a while building one app.

seemed like a crazy thing because you had to own Windows and you had to own Office. But then a new app came along that didn't own any of those, and it was Search. And so that's why I think a lot of people, also because of age and what they lived through, immediately jumped to, oh, these LLMs are going to replace Search. But it turns out that's actually going to be really, really hard because there's a lot of things that Search does that the models are bad at. Really?

Really bad. And so what's going to happen is a new app is going to emerge. And then when the new app emerges, that's going to get vertically integrated. And the research app is a super good example of that. And then all of a sudden other apps are going to spring up. Oh, there's Google Maps and there's search and then there's Chrome. And then it goes back and eats the things.

that it couldn't do before. And I really feel like that's the trajectory we're on. Now, it's still a matter of where and what integrates. But the thing is, is that the apps that ended up mattering on the internet literally didn't exist before the internet. And I think that's what people are losing sight of. Same with mobile. Same with mobile. They're all, everybody is complete. There were no social apps. You know, okay, fine. I get it. There was GeoCities and a bunch of other stuff. But people get so caught up on

New thing, it's going to replace something. Zero-sum thinking is so dangerous. Zero-sum, and you can think of everything as this spectrum. And when something new comes along, the whole spectrum gets divided up differently, which is what Google said when they bought Writely. They said, you know what people are going to do with the internet? They're going to type stuff. And what are they going to type? They're going to type it, but they're going to type it with other people. Okay, so this is great. So we're actually seeing this happening now, which is someone will come up with a model that does something like in a consumer space, let's say like text-to-image model.

And then it turns out that over time, people are like, oh, it's kind of like Canva. Exactly. It's like slowly do the AI native version. You're right. Just like the cloud native version of Word, the AI native version of these kind of existing apps. The reason it's important is because it looks like Canva or it looked like Word or it looked like PowerPoint or it looked like Excel. But what's important is that they're actually different.

Nothing is going to ever be PowerPoint again. Why? Because PowerPoint, the whole reason for existing was to be able to render something that couldn't ever be rendered before. And so all of the whole product, it's 3000 different formatting commands. Like literally, that's not a number I made up. Like it's 3000 ways to kern and nudge and squiggle and color and stuff.

And actually, it turns out you don't need to do any of that in AI. So the whole product isn't going to have any of those things. And then it turns out all those things make it really hard to make it multi-user. And so then when Google comes along and starts to bundle up their competitor that's going to replace it, they're focused on sharing.

Hey, it's Steph. Look, we cover a lot of successful businesses here on the a16z Podcast. And one common thread across every successful company, well, they've figured out marketing. But as channels are saturating and the supply of content increases by the second, it's hard to stand out amongst the noise. So with marketing only becoming harder to grok,

I look to two of the sharpest marketers in the world, HubSpot CMO Kip Bodnar and SVP of Marketing Kieran Flanagan, who host Marketing Against the Grain. They break down all the latest marketing trends and growth tactics before the masses catch on, all with a healthy dose of AI. Plus, you may even hear me on an episode or two.

So whether you're trying to grow a company, newsletter, YouTube channel, or just simply want to keep your distribution edge, check out the podcast Marketing Against the Grain, wherever you're listening now. So, Stephen, let me ask you this. You said something really interesting. I'm glad I did. Which is this has to pan out like the Internet. And you guys have used examples of different companies, the mobile wave, cloud era. Those are things we can learn from. But I just want to probe you further.

Is there something different here? To bring it back to DeepSeek, this is very important to realize the capabilities of China. It's a very credible player. But I don't think that R1 itself as a standalone is going to have that deep of an impact. But on the internet, so there's actually these parallels when it comes to capital build-out that you see in the AI, which is it takes a lot of investment. And there's a special parallel that Marc Andreessen actually reminded me of, which people don't tend to see as well, which is in the early days of the internet, like the mid to late 90s,

A lot of investors, a lot of big money, think banks or sovereigns, they wanted exposure to the internet, but they had no idea how to invest in software companies. Like, what are these new software companies? Who are these people? Like, they're all private companies. So what did all of them do? They all invested in fiber infrastructure. So we're starting to see this thing again, right? We see a lot of banks and big investors, listen, we want to build up data centers because they don't know how to invest in startups like we know how to invest in startups, right? Yeah.

So on one hand, you could be like, oh, we're going to see all of this kind of capital expenditure and all this capital expenditure is going to go into physical infrastructure. And therefore, we're going to have another fiber glut equivalent, but a data center glut. So the counter to that point where I think is different is at the time of the fiber build out, you've had one company which happened to be cooking its numbers where it had a ton of debt to build all of this out. When the price of fiber dropped, that company went out of business and that caused a huge issue.

you have a much better foundation for the AI wave. The primary investors are the big three cloud companies. They've got hundreds of billions of dollars on the balance sheet. Even if all of this goes away, they'll be fine. NVIDIA can take a price dip. NVIDIA will be fine. So I don't think we're heading to the same type of glut and crash that other people have, which is very absurd.

appealing to draw parallels to the internet for that I don't think is there. Oh, I am completely with you on that. That part of it is going to look like the amount that Google invested in the early 2000s or the amount that Facebook invested five years later. Or people forget that Microsoft poured, I don't know, 30, $40 billion into Bing. And it's still number three or whatever, but it still doesn't matter. Yeah, I would bet. I don't know this is a fact. I'll bet Meta is spending more money on VR than it is on AI right now.

Yeah. Not just to show. And maybe Apple too, right? Oh, Apple. Also, because Apple, whatever is bigger than gargantuan is how much they're spending. And so it really isn't about the investing profile. And I think that is a super important point that you made to really just hammer home. There's a certainty that nobody's going to come out of this unscathed, but the scathing is not going to be at all what anybody thinks. And then not like what it was. Oh, yeah. Like, WorldCom, I believe, at $40 billion in debt.

Right. I mean, it was just one of these things where structurally it was. Oh, and there were companies that we've all forgotten about that went bankrupt over that era. Actually, there was one in Seattle whose name I'm forgetting, but that was like 20 billion just gone. To your point, these companies have had so much cash on their balance sheet. They've been waiting for a moment to invest in the next generation. Which also contributes to their willingness to scale up as much as they did. So let's talk about that. In your article, you talk about the difference between scale up and scale out.

and the natural tendency in these early parts of the wave to scale up when really there tends to be a shift towards software basically going to zero cost. So, Steven, what do you mean by that? And are we at that change in trajectory? Now we'll just switch to make sure we're really talking about the technology now, not the finances. But when you're big...

you want to double down on being big. And so you start building bigger and bigger and bigger computers that don't distribute the computation elsewhere. So if you're IBM, you just say the next mainframe is another mainframe that's even bigger. If you're Sun Microsystems, you just keep building bigger and bigger workstations. Then if you're Digital Equipment, bigger and bigger mini computers. And by the way, all along, you're just doing more MIPS in the acronym sense than the previous maker for less money.

And then the microcomputer comes along. And not only did they do like fewer MIPS, but they cost nothing and they were going to be gazillions of them. And so you went from an era when IBM would lease 100 or 500 new mainframes in a year and Sun might sell 500,000 workstations to like, oh, let's sell 10 million computers in a quarter.

And I think that scale out where there's less computing, but in many more endpoints is a deep architectural win as well, because it gives more people more control over what happens. It reduces the cost. So today, you know, the most expensive MIPS you can get are in like a nuclear powered data center with like liquid cooling and blah, blah, blah. Whereas the MIPS on my phone are free and readily available for use.

And I think that, to me, has been a blind spot with the model developers now. They all do it. I mean, I run Llama on my Mac, and the first time you do it, your mind is blown. And then you start to go, well, now that's just how it should happen. And then you look at Apple and their strategy, which...

The execution hasn't been great, but the idea that all of these things will just surface as features popping up all over my phone and they're not going to cost anything. My data is not going to go anywhere. That's got to be the way that this evolves. Now, will there be some set of features that are only hyperscale cloud inference? Oh yeah. Just like most data operations happen in the cloud now, but most databases are still on my device.
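As a concrete illustration of that scale-out point, here is roughly what running an open-weights model on a laptop looks like today. This is a minimal sketch assuming the llama-cpp-python bindings and a locally downloaded quantized GGUF checkpoint; the file path and model choice are illustrative.

```python
# Minimal sketch of local, on-device inference: no cloud round trip, and the
# prompt and output never leave the machine. Assumes llama-cpp-python is
# installed and a quantized GGUF checkpoint has been downloaded; the path and
# model choice below are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # local weights file
    n_ctx=4096,   # context window
    n_threads=8,  # runs on ordinary consumer CPU cores
)

response = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "In one sentence, why does on-device inference matter?"}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```

The distilled small models discussed earlier are exactly what make this kind of phone- and laptop-class inference practical.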

So I'm smiling because this is the story from like a microcomputer guy. I'll tell you the story from an internet guy. There's the perfect parallel, which is, do you remember the switching wars? Oh yeah, absolutely. So for the longest time you had the telephone networks and they were perfect. They would converge in milliseconds. They would never drop anything. You got guaranteed quality of service. And here

Five nines. That's five nines. And then here comes the internet. We had none of these things. Like, convergence was minutes, like it dropped packets all the time. You couldn't enforce quality of service. And there were these crazy wars at the time where like, why are you doing this internet stuff? It's silly. We know how to do networking. But what the switching people, the telephone people didn't get was what happens when you actually have best-effort delivery and then how it enabled the endpoints. They needed the value to be in the network and they couldn't think that way. And that really brought them down.

And I think the exact same thing is playing out. I actually see it a lot of the times. People, they look at these models, oh, they hallucinate. Or, oh, they're not correct at these things. But they enable an entirely new set of stuff, like creativity and coding. And it's an entirely white space, and it's going to grow very quickly. And to assume that somehow they don't fit the old model is irrelevant to where it's going to go. What I do is I just...

s/QoS/hallucinate. Yeah, yeah, exactly. Because like, to now explain, what happened was I was going to all these meetings in the 90s with all these pocket protector AT&T people who would just show up and they would yell at Bill Gates like, QoS, QoS. And we had to go all look up what QoS was because...

Not only were we not using TCP/IP, but the network we were using never worked because it was like a PC-based network. And the IBM people... Like the NetBEUI stuff? Yeah, it was NetBEUI. I am talking to a networking genius, so I should, like, the ping of death. But it was just hilarious because they're telling me about QoS. I didn't know what it was. I walked them over to my office and this was like in the winter of 1994.

And I'm like, oh, look, here is a video of the Lillehammer Winter Olympics playing on my Mac. Yeah, awesome. And it was like literally it was a postage stamp the size of an iPhone icon. And they were like, well, that's 15 frames a second. I'm like, I know it's usually like five. And like where's the audio? I said, well, if I want the audio, I just call up this phone number on your system. And then they just laughed at me. Yeah.

And so here we are, of course, all using Netflix on every device all over the world. And I think that they can't understand that these paradigms where like the liabilities either don't matter or just become features. And of course that's what gave birth to Cisco and they just went, well, this is how we've been doing it. And it all works. It only works in our crazy weird universities and in the defense department. And now that's all we use. And I want to tie this back to DeepSeek because.

The reason we're getting so excited about this is because we've seen things like DeepSeek come out before, and it's not zero-sum. It doesn't replace the old thing, right? It is a component of the new thing. And the new thing, we still haven't even envisioned yet, right? It's like the internet is just coming right now, and our excitement is for the new thing to come. And so when I saw DeepSeek, I'm like,

Amazing. This is another step to basically AGI in your pocket. These can run on small models. It shows that we're going forward. My reaction was not, oh shit, I need to like short NVIDIA or whatever. I think that's actually the wrong answer. Yeah. I mean, I read the let's short NVIDIA blog post that flew around that whole weekend. And I was like, are you crazy? I'm like, hey, Jensen is a genius.

B, their company is filled with geniuses. What about: the TAM just expanded, don't you think? Yeah, exactly. And so it is super exciting. The scale-out step just happened. And so now you could see everybody doubling down. And to your point that you made earlier that I think is super insightful and really important is this enabling of specialized models

Because that's what's going to end up being on your phone. And that's what's going to enable the app layer to really exist. To me, this is all the equivalent of the browser getting JavaScript. Yes, I do. Because once the browser got JavaScript, then all of a sudden you could do anything you needed without going to some standards body or building your own browser. Yep.

And I think that's where we are right now. One follow-up there is, if you think about how this progresses to date, I feel like the benchmarks have always been like, which model has the most parameters? How's it doing on this coding test? That isn't representative necessarily like, what device can this fit on? How much does it cost? Do we expect then a different set of

of benchmarks or things that we're judging these models by? Or should we just be looking at the app layer? Does there need to be some sort of shift that kind of moves us away from bigger, better, as you're saying, scale up and something that represents scale out?

Of course, I thought all those benchmarks were just silly to begin with. To me, they all seemed like, remember the benchmark we used to do with browsers was like how fast it could finish rendering a whole picture. And so Marc Andreessen invented the image tag in the browser. The neat thing that they did in their implementation was progressively render it. And then what that did is empower stopwatches all over the world of magazines to write who finishes rendering a picture faster.

And of course, here we stand today, like that's a thing you can measure even. That's a time and it doesn't matter. And so I think those will all go away. And we're just very quickly going to get to what does it actually do? I do think that the measure that's going to start to really matter is

will depend on the application that people are going after. Take this research stuff that just appeared like this week. Well, it turns out when you're doing research, the metric that matters is truth. And all of a sudden you're giving footnote links and you're giving sources because what's really happening under the covers, it's a little bit less of generative and a little bit more of IR. And all of a sudden vector databases and looking things up and reproducing them matter. And

And so now we're probably along the lines of ImageNet and they're going to start to generate thousands and thousands of routine tests that are like, is this...

True. This is totally an aside, but you reminded me of a kind of a weird historical errata, which is the fact that Andreessen made the image tag. So in a way, he's also the grandfather to some AI because CLIP, which is an AI model, basically will take an image and describe it. The way it does it is using the meta tags in the image tags. So he created the metadata to do this. I will say back on the topic of the images, here's one thing I've noticed working with these companies where these models are actually pretty magic by themselves.

If you have a big model, you just expose it, people use them, which is very different than computers. You just put the model out there. The thing is all the other models catch up very quickly because they distill so well. So it's not defensible in a way. And so the companies that are defensible that I've seen is they'll put out a model that's very compelling.

And then once the users are engaged in the model, they find ways to build an app around that actually is retentive, right? So it'll start converging on like PowerPoint. It's more stateful and requires configuration. So that tends to be very defensible. And then the applications that use models, they use lots of models and they do fine tune these models a whole bunch. And so the last two years have been the story of the large model. It really has been. And they've been magic. Like people use them and people really like them. And the first time you're in ChatGPT, you're like, this is amazing. And now I think we're in the era of,

workflow around models, which are stateful complex systems, right? And also many models. Many models is a great point to build on that. This is what happened with user interface. The whole notion of user interface that IBM put forward was just derived exactly from their green screens and their 3270s. And they made a shelf of rules on how- For the characters. Of like exactly how the UI should be. And this is the F10 button and this is the whatever. And then-

It turns out that people were building all sorts of UI frameworks. It actually looks exactly like the browser today, where there's a zillion frameworks on the endpoint. You pick and choose. You do what you want to do. You can invent a new calendar dropdown if you want or not waste your time. It's really up to you. And I do think that

aspect of creativity is extremely important to applications. And then for apps to be differentiated and also, to use the MBA term, have a moat, apps are going to also embrace the enterprise. And for better or worse, one of the lessons that we keep learning is if you want to get adoption in the enterprise, you're going to have to do a bunch of work to turn off parts of your app or to filter parts of your app or to disable it or whatever it is.

And I think the smartest entrepreneurs are going to recognize the need for sign-on, single sign-on at the beginning. RBAC and SSO are like, every time. Every single time. Because it turns out that's also a great way to price. It's not super hard. And I think so much...

dumb stuff has been done about AI and alignment and censorship and whose point of view is it and all this other stuff that there's now a whole industry that just wants to show up and tell you all the things that they don't want out of AI. And the smartest entrepreneurs are going to actually get ahead of that. And they'll be there to sell because it turns out that is absolutely

actually enormously sticky in the enterprise. And I think that we're going to see the smart productivity tools embrace that immediately. And it could be even at the most granular level of turn it off for these users or whatever.
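To make the RBAC and SSO point concrete, here is a minimal sketch of the kind of per-role feature gating an enterprise buyer expects. The role names, permissions, and the mapping from SSO group claims are illustrative, not any particular product's design.

```python
# Minimal sketch of role-based access control (RBAC) for AI features: an admin
# can switch capabilities off per role, which is the enterprise control
# discussed above. Role names and permissions are illustrative.
from dataclasses import dataclass, field

ROLE_PERMISSIONS: dict[str, set[str]] = {
    "admin":      {"use_ai_assistant", "view_ai_audit_log", "configure_models"},
    "analyst":    {"use_ai_assistant"},
    "contractor": set(),  # AI features disabled entirely for this role
}

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)  # typically mapped from SSO group claims

def can(user: User, permission: str) -> bool:
    """Return True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user.roles)

alice = User("alice", {"analyst"})
bob = User("bob", {"contractor"})
print(can(alice, "use_ai_assistant"))  # True
print(can(bob, "use_ai_assistant"))    # False: turned off for these users
```

In a real deployment the role-to-permission table would live in an admin console and the roles would come from the identity provider, which is exactly why single sign-on shows up so early in enterprise sales.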

Well, we had Scott Belsky at Speedrun recently, and to your point, he talked about Adobe and someone said, well, you have all these licensed images, right, for Firefly. Do consumers really care about that? And he was like, honestly, not really. But you know who does care? The enterprise, right? So to your point, those are two different modalities and founders are going to have to figure that out. But I do want to touch on, you know, a lot of people are talking about DeepSeek as this Sputnik moment, right?

And that can be viewed in the lens of geopolitics, U.S., China. But also, if you think about Sputnik, that wouldn't have been a moment if Kennedy didn't do his moon landing speech, if we didn't actually get there. So in other words, if changes weren't made. And so let's say you're in a boardroom, you're an advisor. I don't want to talk to the boardroom. I want to talk to the U.S. government, right? And so like for me, actually, the biggest aha of DeepSeek is nothing we've talked about right now. The biggest aha of DeepSeek is how

blind our policies have been around AI. They've been so wrong-headed. So our previous policies around AI have been we can't open source because it'll enable China. We've got to limit our

big labs, you know, we've got to put all of this regulation on top of it. And the reason is for safety and all this other stuff. Export controls. All the export controls so we can't enable other countries. Export controls on chips. We've talked about putting export controls on software, on model weights, all of this other stuff. Like that was our entire policy. And for me, the biggest, biggest, biggest takeaway

from the whole DeepSeek thing is that's the wrong way to do policy. China has got a lot of very smart people. They're incredibly capable. They're great researchers. They can build stuff as well as we can, and they can open source it. We did not enable them. They did this even with export controls on chips, right? So basically all of our activity has been for naught. And what we should be doing is funding and investing in our research labs, and we should be going as fast as we can. And it really is the AI race, just like we went through the space race, and we need to win. And we have everything that we need to win,

The only thing in our way is our own regulatory environment. Just to build on that, the lesson is not Sputnik. The lesson is the internet. Mm.

What we learned from the internet is this: Al Gore famously claimed to have invented the internet, but what he really did was invent the regulation that allowed the internet to flourish. And they could have looked at the internet and said, oh my God, this is a Sputnik moment, and then tried to turn it into what AT&T and WorldCom wanted. And they were there lobbying, trying to make that happen. And frankly, AOL wanted it to happen that way too. And so they ignored that and they went with what made the internet strong to begin with. And so what gave us

this DeepSeek moment was the strength of the worldwide technology community. And so as much as people want to own it and be the singular provider, it's not going to work. The biggest difference, not to overanalyze the analogy, I think it's a Sputnik moment in the sense that it's a wake-up call for half the world. It isn't a geopolitical wake-up call. It's not about war. It's literally just about technology diffusion.

And we've had so many misfires since then. I mean, we had the whole encryption war where we tried to put export controls on encryption and all this. And, you know, although people thought we were being silly as an industry when many of us would champion this, well, you can't. It's like outlawing math.

It turns out it is outlawing math. And the fact that they used those chips, well, in the world's economy, as we've seen, it is very, very hard to put export controls on things. Remember when we were going to export control PlayStations? Oh, yeah. No, Xbox. Like, the government came to us. Or, like, actually, 2048-bit encryption in email. Yes. Because people came to us, well, we can't have bad actors, that's their favorite phrase, bad actors encrypting their email. I'm like, well, they're just going to encrypt

the attachment themselves. And then there's nothing we can do about that. For sure. But in this case, we've actually put export controls on GPUs before. I mean, like a perfect analog. We were like, oh, listen, you can do weapon simulation on these things. Like a PlayStation was the first to actually use the SGI. Right, right, right. If you remember that.

We're going to export control that. We can't let that into Saddam Hussein's hands, the whole thing. Total failure because it just turns out global markets are global markets. And we're much, much better in investing, which at the time we did in our own infrastructure. We did a great job of that. And I think it was a great analogy with the internet and with Al Gore. We should be doing exactly that again. And some politician needs to stand up and be the Al Gore of this moment.

I think that we will get that. So I do think that there is now a wake-up call. I think that the futility of the past four or five years of this kind of stuff is now very, very clear. And I mean that even more broadly than you were saying. Like, I mean, like, the people who wanted to control this technology at this very granular level in all these think tanks and institutes that were all aligned. I mean, the number of books written, the number of academic departments started, the number of assaults on technology companies to align.

I mean, whole meetings in Switzerland about aligning, you know, with the world leaders. That's just not how anything evolves. And the biggest lesson for computing starting in 1981 with the IBM PC or, frankly, 1977 with the Apple has been the creativity at the edge, just enabling that. And I think the problem that the regulators had was they had never faced regulating a connected world before. Right.

And I think the other lesson from DeepSeek is just, okay, the world is already connected. The world is already native in all of this stuff. So now the amount of actual calendar time it takes for something to diffuse technically is zero. I mean, DeepSeek, I think was the number I saw this morning is like 35% of the DAUs of OpenAI. And that's a giant spike because just all the same people are just trying it out because there's no friction. It takes no time.

And so it's so unbelievably exciting to be part of what's going on right now. And we just don't need to throw water on it and be party poopers.

So one thing I will say, I personally don't think this is a crisis moment for OpenAI or Anthropic. I think apps are hard to build. I think that right now the apps that they put out are very complex. They actually know their users. They have very specific use cases. And so I think for them it's a bit of a wake-up call that they can't slouch and they've got to move very quickly. But I'm still very, very bullish on our labs. I think they can stay ahead too. So again, there's this view of

DeepSeek is a crisis moment for NVIDIA, a crisis moment for OpenAI and Anthropic. I don't buy any of that. I think it's more of like a wake-up call for the regulatory environment. And then, listen, we should all acknowledge that, listen, there's going to be global competition. We need to stay ahead.

I would also say that what we should see now, the right reaction from all of these frontier folks is they should all just start building apps because the best feedback loop to build a great platform for other people to use is to be building apps. And there's this whole concentrated conversation over competing with your partners or whatever. Our industry is competition through and through. It's Andy Grove's lesson. So just everybody should be prepared for these big players to compete with you.

But history has shown that's no surefire success. If the TAM grows 10x, there's just a lot of room for a lot of folks. Yeah. I mean, Microsoft spent 10 plus years, like a distant number three in the applications business. And it was a platform shift that all the other players ignored that caused it to win.

And so I think that the TAM is going to be 100x. It's going to be every endpoint. The revenue is going to come from the app's side of it. And then there'll be a developer side of it. It'll just be a different pricing model for different sets of scenarios. But it's going to be there.

So everything is rising right now. Since it is this positive sum growing world, do you have any thoughts just real quick on the fact that this came from an algorithmic hedge fund, a quant? Is that any different to your expectation or does that actually signal that more can participate? It's a good reminder that there are always pockets of people innovating. WorldCom and AT&T did not predict the Internet was going to come out of universities.

They did not think that a physics lab in Switzerland was going to invent the protocols that became foundational. That's so true. And they also didn't expect a failed corporate lab

to develop TCP/IP that became the standard. I mean, it wasn't like the IBM lab. It was like literally a lab that they'd all but shut down because it failed just down the street at PARC. And so... Do you remember like SRI was involved? Like all these places that you don't even think about. Right. And so most of this isn't going to be even in any history that's written in five years. And I think that that is the excitement.

All right, that is all for today. If you did make it this far, first of all, thank you. We put a lot of thought into each of these episodes, whether it's guests, the calendar Tetris, the cycles with our amazing editor, Tommy, until the music is just right. So if you like what we put together, consider dropping us a line at ratethispodcast.com/a16z. And let us know what your favorite episode is. It'll make my day, and I'm sure Tommy's too. We'll catch you on the flip side.