
Reid Hoffman Says AI Isn’t an ‘Arms Race,’ But America Needs to Win

2025/2/21

WSJ’s The Future of Everything

People
Reid Hoffman
Topics
@Reid Hoffman: I'm cautiously optimistic about the future of artificial intelligence. I believe AI can improve our lives, while also recognizing the challenges that new technologies bring. I think the U.S. needs to maintain its lead in AI and ensure that AI reflects American values and interests. This is not just a technology race; it's an economic competition. Government should strike a balance between AI safety and innovation, gradually refining regulation through iterative development and deployment. I'm also concerned about the negative rhetoric and threats appearing on social media, and I call for responsible leadership and civic behavior. @Tim Higgins: As host, I explored the future of AI with Reid Hoffman, including its potential benefits, U.S.-China competition, the government's role in AI safety and innovation, and the negative effects of social media. I asked Reid Hoffman about AI development, regulation, and his relationship with Elon Musk, and dug into his views in depth.


Chapters
Reid Hoffman, a prominent figure in Silicon Valley, offers a cautiously optimistic outlook on the future of artificial intelligence, contrasting the prevailing 'doomer' sentiment. He highlights AI's potential to improve lives through applications like advanced medical assistants and personalized tutoring, and envisions a future where AI acts as a co-pilot for various professional tasks.
  • Hoffman's optimistic view on AI's potential
  • AI as a medical assistant and tutor
  • AI as a co-pilot in professional activities

Transcript


The Jack Welch Management Institute at Strayer University helps you go from, I know the way, to I've arrived, with our top 10 ranked online MBA. Gain skills you can learn today and apply tomorrow. Get ready to go from make it happen to made it happen. And keep striving. Visit Strayer.edu slash Jack Welch MBA to learn more. Strayer University is certified to operate in Virginia by SCHEV and its many campuses, including at 2121 15th Street North in Arlington, Virginia.

There's a lot of doom and gloom when it comes to predictions about our future with artificial intelligence. Think Terminator-style AI warfare. Hasta la vista, baby. Or, more personally, will it take away our jobs? But Reid Hoffman has another take.

He's on the board at Microsoft. He co-founded LinkedIn. And he was part of the so-called PayPal mafia as one of its earliest employees. Hoffman is cautiously optimistic about the future of AI, even as he acknowledges that new technologies can be frightening at first. When we have these kind of industrial revolution transitions, they can be difficult.

Hoffman calls himself a bloomer, not a doomer or a gloomer, somebody who believes that AI can make our lives better in countless ways.

He was an early backer of OpenAI and is on the board of Microsoft, two companies that are making huge bets, investing billions of dollars into AI infrastructure in the hopes that it will pay off, even as upstarts like China's DeepSeek hold out the promise of more efficient ways of developing AI.

Hoffman says that DeepSeek's models might be more dependent on its American rivals than it admits. DeepSeek didn't respond to our request for comment. Still, if it sounds like an arms race, it is. Though Hoffman likes to avoid such warlike imagery, he does see a need for the U.S. to dominate this growing industry. And so part of what really matters when, you know, we think about this as, for example, American society is,

We want AI to be American intelligence. We want the values that we hold dear. We want our industries to be amplified and to be leading the world. And at home as well. Even though Hoffman is a major Democratic donor who backed former Vice President Kamala Harris, he says he's encouraged to see big tech leaders like his old PayPal colleague David Sacks hold influence with President Trump.

But Hoffman has stronger feelings about another member of the PayPal mafia who is deeply involved with the Trump administration, his old friend Elon Musk. Well, it's one of the reasons why I think part of the responsibilities of power and leadership is to use it in favor of a good civil process of, you know, kind of

appropriate truth-seeking of not, you know, calls to, you know, threat or violence. I think that's an important part of leadership anywhere in society.

From The Wall Street Journal, I'm Tim Higgins. Christopher Mims is off this week. This is Bold Names, where you'll hear from the leaders of the bold name companies featured in the pages of The Wall Street Journal. Today we ask, how do we ensure that we will live in a world where AI is the good guy and not the villain? And a note before we get started, News Corp, owner of The Wall Street Journal, has a content licensing partnership with OpenAI. Reid Hoffman, welcome.

We've got a lot to cover on the state of AI, tech in general, some politics, the hottest thing in San Francisco right now, the Jevons paradox, and of course, this new book of yours, Superagency: What Could Possibly Go Right with Our AI Future? Let's start with that subtitle. That's like at total odds with everything I've seen in Netflix's Black Mirror or the Terminator movies, where everything can go wrong. So I'm just curious, why are you so optimistic?

So part of what we do in the book is we have kind of the history of how human beings have encountered various technologies, written word, printing press, car, electricity, mainframe computers, internet.

And each time we have this conversation, it's like, oh my God, this is going to crush our society. It's going to crush human agency. It'll be terrible. Matter of fact, the dialogue around the printing press is very similar to the dialogue around AI today. And yet afterwards, we have a great amplification of human agency. And the thesis is AI will be the same. And that part of super agency is to have

this kind of evolution when many of us get those superpowers together. The fact that you get superpowers also helps mine. We can get more creative, more kind of an informational GPS of navigating, but so could the doctor who's working with us. And that's part of the question around how things can go right in terms of our AI future. So I guess I'm curious,

I think there is a skepticism about big tech out there. Perhaps seeing this as empowering big tech companies or the man, if you will, or big business. And so I'd like to hear kind of how it's going to help me, the individual, how it's going to help me. It's all about me here. That's all good. It's kind of where we start as human beings.

So first, two obvious things. A medical assistant that's better than a good GP on every phone running 24 by 7 can help you with yourself, your family, your friends, community, et cetera. Second is a tutor. Every subject at all ages, infinitely patient, helping you with everything.

And then, you know, I think we're a small number of years away where every professional activity has one or more co-pilots, kind of what you use as amplification of what you do. It's a research assistant. It's a writing assistant. It's a, you know, OpenAI having released deep research as kind of a clear instance of a research assistant. And already today, if you're using...

ChatGPT, you can do anything from the mundane of, these are the ingredients in my fridge, what could I possibly make for dinner?

to, you know, kind of more, you know, detailed things that could involve, I'm going to have this difficult conversation with a colleague at work. What advice would you give me? It's funny you mention that because, unfortunately, my co-host couldn't be here today. He's off. So I turned to Copilot for some suggestions of good questions to ask you, and I'll get to those later. But I hear what you're saying here about the potential of this technology. And I think, you know, a lot of people see a lot of potential.

We are in something of an arms race, with tech companies seemingly falling all over themselves to announce massive investment plans. I think of Stargate, announced at the White House with Sam Altman of OpenAI, half a trillion dollars to build AI infrastructure. But then we have China's DeepSeek, which highlights a totally different approach, a more efficient, perhaps cheaper approach to developing AI.

And I got to ask, you're on the board of Microsoft, which is partnered with and invested in OpenAI. Given DeepSeek, I think some, including people on Wall Street, might have assumed a pause in these big tech investing announcements. But Microsoft has reaffirmed its plans. Google recently said that it was upping its spending. Why are we not seeing more self-reflection on this kind of spending spree that's out there?

Well, the first is there's a fair amount of FUD around the DeepSeek announcement. It's almost certainly used large models, the result of large compute, in order to, technical term, distill. ChatGPT seems to be one of the ones there's evidence about. Llama as well. And likely that they actually had some access to scale compute there.

And, you know, I think part of the thing about having a large model is that you will have a choice about how you enable distillation from it. And the larger large model will also make the higher quality small models. And that's, I think, part of the reason why you see everyone going, the scale model, the scale compute still really matters in the training. And then, of course, you know, part of building all of this compute infrastructure is you

the anticipation of the fact that as you add intelligence to all of these different kinds of capabilities and, you know, going and using o1 or Deep Research or, you know, DeepSeek, you begin to see how much capability they are already adding.

So FUD, fear, uncertainty, doubt, a technical term, right? The idea that maybe DeepSeek's claims are not as good as they seem on the surface. I think you're getting at this idea that they were using more chips than perhaps they suggested. And I think the other thing we're getting at here is distillation, which in simple layman's terms is,

your AI model is asking a more sophisticated AI model questions about how it did its thing to kind of jumpstart its learning rather than starting at the beginning to build up that knowledge. So essentially, like me coming to you asking how this stuff works, I go to an AI model and ask it and it tells me. Yes, exactly. And so the deep seek work is critically dependent actually on scale compute. So in a lot of ways-
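The distillation idea described in this exchange can be sketched in a few lines of code. The snippet below is a minimal, illustrative sketch, not anything from the episode: the function names, logits, and temperature value are all made up for demonstration. The core of the technique is that a "student" model is trained to match the "teacher" model's softened probability distribution over answers, which carries far more information than a bare right/wrong label.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature
    produces a softer, more informative distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: the student is rewarded for matching not just the
    teacher's top answer but its full pattern of soft beliefs."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

# Hypothetical logits over three answer choices.
teacher = [4.0, 1.0, 0.2]        # a large model's nuanced view
good_student = [3.8, 1.1, 0.3]   # closely mimics the teacher
bad_student = [0.2, 1.0, 4.0]    # gets the ranking backwards

# The student that mirrors the teacher incurs a much smaller loss.
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

In practice this loss term would be minimized by gradient descent over many examples; the temperature is a design knob that exaggerates the teacher's lower-ranked guesses so the student can learn from them.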

I'm not surprised to hear that you are of the camp that believes in spending a lot to scale up. I mean, one of your other books, Blitzscaling, is built on this idea of if it was a race to build a tower, to win building towers, you want to be the fastest one to build that tower, right? I live here in San Francisco, and after we're done,

I'll probably go to my local coffee shop, where I'm bound to hear somebody talking about the Jevons paradox, which is all the rage these days, in part because of that late-night tweet from a certain Microsoft CEO. And for folks out there, what is this paradox? Why are we all talking about it? Well, the Jevons paradox is named after the English economist William Stanley Jevons. And it's an economic theory that says when you make a resource more efficient, that

can actually increase its consumption, because the lower cost can actually lead to a massive increase in demand. It's like there's nearly infinite demand for electricity at certain lower prices. Obviously, energy is the most classic example, but we also think that will be true for AI, because making everything smarter can just lead to improvements all over the place. One of the things that it seems like

The conversation about DeepSeek really highlights that China is in the game or that it is trying to be competitive. I think when you read what experts on AI say, when they read the kind of the information that DeepSeek has put out is they're generally kind of impressed about what's been occurring. And it says to them that China is a real threat to the U.S. in kind of this arms race, if you will. And, you know, I wonder what you think, where you see the U.S. government needing to be involved in

You kind of get into this in your book with this idea of sovereign AI. And where's that role? Part of what we're in the middle of is the cognitive industrial revolution and the same kind of amplification of economic productivity that the industrial revolution did and then caused industries to be created, to get superpowered. The same thing is going to be happening here. And so part of what really matters when we think about this as, for example, American society is

We want AI to be American intelligence. We want the values that we hold dear. We want our industries to be amplified and to be leading the world, setting the standards. We want our AI to provide intelligence to industries in other countries. And this is an economic competition. I actually tend to steer away from the term arms race because I think there are issues there. But first and foremost is

is kind of the creation of an industry connected, of course, to national security and all the rest. So I think it's how do we make sure that our industry, which is strong and across the board, and we have a significant number of leaders in this, are developing in ways that are helpful to American society and American industry, develop in ways that provide American leadership around the world. And that's part of you were gesturing at Stargate, part of the thing that Sam Altman was out

recruiting a lot of investment to build data centers, energy power, because you do need scale in this. I think that the government enabling that is really good. I think the government enabling our industries to take risks and build out this technology within intelligently accelerated and guided ways

is, I think, part of the way, at least, that the U.S. government can participate in helping shape the future of American intelligence. So we just discussed why Hoffman thinks the U.S. needs to win the AI race, but what role does the government play in making sure it is safe while encouraging innovation? You kind of lose control of the

economic race and advantage for leadership in the cognitive industrial revolution, but you're also losing the increased safety features. Stay with us. Imagine what's possible when learning doesn't get in the way of life. At Capella University, our game-changing FlexPath learning format lets you set your own deadlines so you can learn at a time and pace that works for you.

It's an education you can tailor to your schedule. That means you don't have to put your life on hold to pursue your professional goals. Instead, enjoy learning your way and earn your degree without missing a beat. A different future is closer than you think with Capella University. Learn more at capella.edu. I mean, we're kind of dancing around the term that I think a lot of people in Silicon Valley get worried about, which is regulation.

And part of what is interesting watching this kind of new industry evolve is the messiness of it all, the drama. Right. And I'm reminded of, more than 100 years ago, the car companies' race on Sunday and sell on Monday, a saying that got at the idea that these startups of the day, the Fords, the Cadillacs,

were out there learning and making iterative changes as they went. And in your book, it seems like you see that as maybe perhaps a roadmap, if you will, for how we should be regulating AI. Cars, roadmap. I'm pretty good. Pun intended. Thank you, Copilot. And I'm curious about that. How do you see that history kind of informing the future?

First is you say, well, are there any huge and catastrophic things that can go wrong? Let's make sure we just don't get there. So those are the very limited, focused things to regulate in advance. What would be an example of something where we say, we do not want that? Well, say you're enabling cyberattack tools in ways that would enable terrorists or criminals at high scale.

Seems like we can agree on that. Yeah. We want to try to prevent that in advance. By the way, the industry has an interest in that too, so they're also working on it either way. Then...

As you're driving down the road and you're discovering which things actually you also need to add into regulation, and sometimes government will be necessary. So like the car industry didn't want to add seatbelts. Consumers didn't really have demand for it. But when we're looking at it and saying this will save a whole bunch of lives, we add in seatbelts through the iterative development and deployment. And that's exactly what we should be doing with AI today.

And by the way, the industry also figures out other things, you know, windshield wipers, airbags, crumple zones, et cetera.

All of this is part of the way that you kind of deploy, learn by lots of kind of feedback from consumers and society, and then improve. And this is part, of course, what's magical about ChatGPT, because with hundreds of millions of people getting exposed, you can begin to go, oh, this really works. This doesn't work so well. This is an issue. How do we steer it?

And that's how we can get to good governance, some of which will include kind of focused regulation. I mean, since we're talking about the car business, I mean...

Part of the reason why some of these laws and regulations came about is because people could physically see the people getting run over. People were going too fast on Woodward Avenue in Detroit, and so they needed to have a stop sign, right? I think one of the things that scares some people in this space is that some of this AI technology is in a lot of ways like a black box, and it's not quite clear how these systems are coming to their conclusions, and people are not quite sure of the unintended consequences.

And some have said, hey, let's put a pause on this. I think you probably are of the camp that you don't want to pause development here. What's the risk of pausing? Well, there's a number of risks of pausing. One is you kind of lose control of

the economic race and advantage for leadership in the cognitive industrial revolution, but you're also losing the increased safety features. And it's one of the reasons why, for example, in the general development of technology,

and industry, why the U.S. over the last 10 years has doubled its GDP relative to Europe: because it's a, hey, let's do this iterative development, let's keep building, versus have regulatory pauses. Now, I do agree that

People do have worries, and some of it is the black box and uncertainty. But that's part of the reason why you have a lot of people play with it. That's part of the reason why, you know, I've helped stand up the UK, US and other safety institutes to be trading best practices and engaging with the commercial side on doing the right testing before you deploy. And when you deploy, are you collecting the right information in order to improve?

And getting to a better-aligned future is part of the reason why we want to keep moving. I think one of the things you really do in this book is kind of explore the dance between innovation and government and the importance of that. You are cautiously optimistic about AI. Are you still cautiously optimistic about AI with the current administration? Well, I think, you know, there's a few very promising things. One is getting...

People involved who actually come from industry and have technical depth in knowing what's going on. David Sacks, for example, I think. David Sacks. You know him. You go back to PayPal with him. Yep. And so I think there's a number of folks that they're kind of including. And I think that's really good. And I think also the realization that this is really important for America's industry and position in the world.

My sense from some of these tech leaders is that they didn't feel that they had the ear of the Democratic administration, not like they did perhaps during the Obama administration. And I'm not talking about Elon Musk feeling slighted about, you know, not getting invited to an electric car event, but more broadly. And it makes me wonder, have the Democrats lost tech? Have they lost Silicon Valley, in your opinion, or not?

Well, I think they will need to build bridges. I think there's a number of different tech folks in the Valley that align themselves to a set of things that I think are kind of core to the Democratic agenda, like providing as much elevation to both middle class and communities that could use some economic empowerment. But on the other hand, of course, there are some forces within the Democratic Party that

that tend to operate in very anti-big tech ways versus how do we harness all of our tech industry to build the future with technology to benefit everyday Americans. One of the things in your book that you talk about is how AI can help improve government.

whether it's trying to analyze where people are to come up with legislation or there's all sorts of ways that you kind of get into. But, you know, as I read it, in a lot of ways, it sounded a lot like DOGE, the Department of Government Efficiency, Elon Musk's effort. Do you see similarities there? I mean, in a lot of ways, it's the idea that technology can make government better and more efficient and more inclusive of people's opinions and what they want.

Well, I think most people who understand these things are obviously strongly in favor of efficient organizations of all types, including efficiency in government. Who doesn't want efficiency, right? Yes, exactly. And look, efficiency of providing good services to citizens, efficiency of operations, like, the promise of all that is extremely important. And it's one of the things that I

hope for from DOGE. Now, I think it's also important to try to do that in compassionate ways. I don't think one needs to be cruel about how one does it. When we have these kinds of industrial revolution transitions, they can be difficult.

How do we steer towards the positive, towards the graceful, towards the more human outcomes, I think, is one of the things that we should be doing. It's part of seeking the positive. And I think that's what we want for DOGE as well. It sounds like perhaps more surgical rather than blowing it up.

Yeah. And also, you know, it doesn't have to be done in a week. It can be done in a few months. It can be transitioned in ways that allow people and organizations to adjust, and people to adjust their own work paths and career paths. We've learned that from many decades of intelligent business management. And I think that is important to do here, too. Elon Musk didn't respond to our request for comment.

Now we've heard how Hoffman's vision for AI in government is different than what Musk has been promising for the Trump administration, namely through his work with the Department of Government Efficiency, or DOGE. But what about Hoffman's relationship with his old friend? He's spreading a lot of lies and disinformation about me. It makes it a little harder to have civil conversations. That's next.

This episode is brought to you by Nerds Gummy Clusters, the sweet treat that always elevates the vibe. With a sweet gummy surrounded with tangy, crunchy nerds, every bite of Nerds Gummy Clusters brings you a whole new world of flavor. Whether it's game night, on the way to a concert, or kicking back with your crew, unleash your senses with Nerds Gummy Clusters.

You have known Elon Musk for a very long time, going back to the days of PayPal, where you both were. How do you see DOGE playing out under Elon Musk? It seems like a lot of chaos now to a lot of people. Well, it seems like he is approaching DOGE somewhat the way he approached Twitter, which is quick and wrecking ball.

There's obviously virtues to being aggressive and decisive. It's part of the reason why Elon's such a great entrepreneur. In government services, I think one should maybe be following at least something of a staged process, but time will tell how this plays out. What do you think of his relationship with President Trump? Is it

fueled by his political views, as he's kind of talked about in the last 18 months very publicly? Or do you think there are business motivations behind it? I don't know. I haven't had any conversations about it. So, you know, I think that's...

Yeah, I think you'll have to ask other people those questions. I guess I ask you, you were friends with him. I wonder if you consider yourself a friend of his anymore, given how critical he has been on X in the past few months during this very political season, taking a lot of shots at you.

Yeah. Look, I think he's spreading a lot of lies and disinformation about me. It makes it a little harder to have civil conversations. But I've read, and tell me if this is not correct, that you felt your safety was in jeopardy, that you had to hire personal security because of the attention you were getting on social media, because of the comments he was making against you. I think he was suggesting you were at Jeffrey Epstein parties, which are things you denied and he had no proof of.

Yeah, well, exactly. So when people make slanderous accusations, other people start organizing and start sending me threats and leaving voicemail threats and everything else. So yeah, it's a serious thing when you make these completely unfounded accusations of other people. I wonder what you've learned from that experience about the negative attention that comes from social media. I mean, you're rich.

You have resources. You can afford to get this kind of security. But there are people out there who become the character of the day, if you will, on X or other social media platforms, or who come under the scrutiny of Musk, who maybe can't afford that or are not accustomed to the kind of attention that happens. It's chilling. Yeah. Look, I think, you know,

delivering really gross slander, threats, like, you know, strangely, that the democratically elected prime minister of the UK should be put in jail. I mean, this kind of stuff is, I think, you know, irresponsible and shows a lack of care for what happens to other people involved.

You know, based on your actions. We've heard Elon Musk in the past talk about how he sees X, these interactions, as almost like a playground fight, that he's almost joking around. But there seem to be some real ramifications, as we've seen in the last few months. People are motivated by what they see here.

Well, it's one of the reasons why I think part of the responsibilities of power and leadership is to use it in favor of a good civil process, of kind of appropriate truth-seeking, of not calls to threat or violence. I think that's an important part of leadership anywhere in society.

I know you're optimistic, cautiously optimistic, about AI. Since my co-host was off this week, I turned to Copilot, Microsoft's AI, for a little help. To end on a light note, if you will, it suggested some jokes that it thought you in particular would like. One of them: why did the LinkedIn user get kicked out of the party? Because he kept trying to, quote, connect with everyone. Why don't programmers like nature? It has too many bugs.

These AIs at the moment are full of dad jokes. Well, I think it is ensuring we have agency over humor, at least in the near term. Reid Hoffman, thank you for being a guest today on Bold Names. My pleasure. Thank you very much.

And that's Bold Names for this week. Michael LaValle and Jessica Fenton are our sound designers. Jessica also wrote our theme music. Our producer is Danny Lewis. We got help this week from Catherine Millsop, Scott Salloway, and Falana Patterson. For even more, check out our columns on WSJ.com and let us know what you think of the show. Email us at boldnames, all one word, at WSJ.com. I'm Tim Higgins. Thanks for listening.