
Vinod Khosla’s View of the Future, From AI to China

2023/7/31

On with Kara Swisher

Chapters

Kara Swisher and Naima Raza discuss recent tech developments, including Twitter's rebranding and increased regulatory scrutiny on AI, setting the stage for their interview with Vinod Khosla.

Shownotes Transcript

On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Raul Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.

Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is Kendall Roy pitching you the X Everything app. Just kidding. This is On with Kara Swisher, and I'm Kara Swisher. And I'm Naima Raza, and I'm back. Yes, you are. So much has happened in the tech world. There's a lot. There's always something happening. No cage match. Well, that was never going to happen. Besides you and Vinod Khosla, which we'll get to later in our episode today. Yeah.

But the rebranding of Twitter to X, a rebrand more mocked than Meta's, the announcements around Elon's separate company, xAI, and also increased regulatory scrutiny from Washington on a couple of fronts, on monopolies and on AI. Yeah, absolutely. Washington's getting it together on AI more quickly than it has on anything else before. And there is a series of legislation happening around tech. We'll see if it goes anywhere. It never has before, but it's encouraging that it's early.

And there's a series of lawsuits, whether it's Sarah Silverman, and there's more to come. People like Barry Diller, I think, have threatened to sue AI companies. So I think people are thinking about not letting happen again what happened with the first round of tech taking over everything. Yes. And the Sarah Silverman case is really interesting because her argument is basically like, look, my jokes are being ripped off. Copyright.

It's a copyright case, and it's not dissimilar to when Viacom sued YouTube. But in this case, it's more significant because they're mashing up other people's work; it's plagiarism. I don't know how else to put it. I keep saying that. I remember I was talking about copyright being the most important legal thing, and I think that there will be a lot of that in these companies. OpenAI just licensed AP stuff.

So there's a lot going to be happening here. This is the very interesting thing, because OpenAI having gotten out of the gate first, having first-mover advantage, means that they have actual customers, revenues, et cetera, that can support paying licenses. Yeah. Subscription. Yes. As a result, they're able to build a bit of a moat around themselves. They're able to afford these license fees. It'll become very hard for smaller companies to do so. And that,

the ability of a company like OpenAI to get so big begs the question of bigness, which is what this unlikely alliance of Elizabeth Warren and Lindsey Graham is about. Yeah, I know, the pair you want to go out to dinner with to have some fun. No, they're getting together to try to do a number of things, including create an agency around tech, which I think is an interesting idea. It's something I've thought of a lot.

Every other industry, whether it's Wall Street or whoever, has agencies around it: FDA, FTC, you know. Every other major industry has agencies attached to it, and tech does not. So the question is, should there be a separate agency dealing with algorithms and AI and privacy and things like that? And, you know,

Right now, the FTC does that and the Justice Department to an extent, but it's all based on enforcement and lawsuits. And so maybe there should be someone who's more regulatory on a day-to-day basis. We'll see if it passes. Do you think tech is really, in many ways, like when I think about your reporting over decades and you talk about the bigness and the power and the lobbying power, the power over consumers that companies have,

Strikes me that it's the history of capitalism in America. But do you think it's something so endemic to tech that it requires something different?

No, they're just like regular capitalists. They deserve regular regulators. That's all. They're not special by any means. And they're always trying to make money. And they have more power than, well, I mean, the railroads had power, right? And then they didn't. Yeah. You know, the television industry was very consolidated. They're all consolidated. So there's all kinds of versions of this, but they're more powerful. And we live in an age where, just by the distribution mechanism, they are celebrated.

So much. We've lionized a lot of these founders and also a lot of these venture capitalists over the years. And Twitter, or X, whatever it is, has been a big part of that, boosting the voices of these people. Yeah. Senators Warren and Graham did have a piece in the New York Times last week saying when it comes to Big Tech, enough is enough. I think that's probably the right thing. I just don't think it's going to pass. That's all.

You know, there's not a taste for new agencies. Yeah. But I think they're right in this one line: nobody elected big tech executives to govern anything, let alone the entire digital world. If democracy means anything, it means that leaders on both sides of the aisle must take responsibility for protecting the freedom of the American people from the

ever-changing whims of these powerful companies and their unaccountable CEOs. I think that's absolutely right. And Barack Obama said a similar thing. Tech is not going to solve our problems, or the really hard ones. They just leave us the hard ones, and then they take all the good stuff. But with President Obama, one of the critiques that he's had from many people, including yourself and myself, is like, well, we let it get too big. Well, at the end, he changed his tune quite a bit. He changed his tune. So let him do it. Let him do it. So our guest today is a longtime venture capitalist, Vinod Khosla, someone who's been

in the Valley for, I don't know, 40 years. I don't know, a long time. He was a co-founder of Sun Microsystems in the 1980s, a former Kleiner Perkins partner in the 1990s, and then started Khosla Ventures, his own fund, in this millennium. And he's also one of your favorite people to battle with on Twitter, Kara.

Yeah, no, I don't battle with him as much. I battle with much different people. But he's very, you know, I've known him for years. You know, he's been at the forefront of a lot of stuff. He's been very early. He's a genuine techie and a genuine business person and really does think way ahead. He's also willing to debate, which many of these people won't; they get to a certain point and they just don't want to be bothered by irritating reporters who...

may know as much as they do. He's recently been doing a lot more, and we have been talking about doing this interview for a while, but you were a little reticent. I really wanted to do it. You did not want to do it. Explain. I've come around. I thought it was a really great interview. But I wanted to do a sort of

I would rather do a startup person, a new startup person in AI, and lean forward. But that's just my tendency. You know, I just think he's a legendary venture capitalist. If we did John Doerr, well, Vinod is in that range, John Doerr and him and several others. Reid Hoffman, I'd say. So we might as well do Vinod, too. I thought he was important for two reasons. One is his early work on climate and AI, which he...

He started writing about AI as early as 2011. He did. And seeing it as the next platform after mobile, which was early to that game. And then the second reason I think he's a really interesting character is because of his global view. I have spent time in Silicon Valley, at Stanford, on Sand Hill Road. I would say that, like, there's a...

There's less globalism in Silicon Valley than I would expect for what has been the frontier creator of technologies. And there's been this kind of build-it-and-they-will-come philosophy. Someone like Vinod has an appreciation, I think, of the geopolitics and the role of the China and India tech giants and the competition from them as well. Yep, he does. He definitely has much more of a global view.

Most people in Silicon Valley just are American, American, American. So, you know, and he is too. He's invested in mostly U.S. companies, but he certainly has a global perspective. All right. We'll hear that global perspective when we get back. Let's take a quick break and we'll be back with Vinod Khosla. This episode is brought to you by Shopify. Shopify.

Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech. All right.

How long have we known each other? Long, long time. Many decades. More than... 25 years. Yeah, I think I met you in the 90s, when I got out here in 1996. I think you're one of the first people I called because you were obviously well-known for creating all kinds of companies, but then you were in venture capital at

time. I visited two people, you and John Doerr, here on Sand Hill Road, where you still are. You're still on Sand Hill Road. Yeah, we're still on Sand Hill Road. Where were you at the time? What firm were you at? I was at Kleiner. You were at Kleiner. I was at Kleiner from after Sun. Yeah. When I joined John Doerr. Sun Microsystems. Until 2004, when we started Khosla Ventures. But actually we started it within Kleiner Perkins. So the first two years we stayed inside their building and then got our own building. Which is just down the road. And I visited the two of you. I remember that.

A couple of different people I visited, but you were among the first. But I'm going to start talking about, we're going to talk about a lot of things. We're going to talk about AI, geopolitics, climate tech, your tweets. I'm very interested in your latest tweets, Vinod. I'm not sure what's going on. I'd like to hear about it, but we'll get to that at the end.

But you were very early to the topic, writing about AI for TechCrunch in 2012. Mustafa Suleyman told me in a recent interview that if you look at the pace of change in AI, it's actually been quite incremental. We've been working on this for decades. It's data and computing power that have grown exponentially in the last few years. I'd love your thoughts about why you started to write about it so early. And are we about to see exponential improvement in AI in the years to come?

Let me start with an analogy. When we first met, 1996, that's the year we started Juniper. Explain what Juniper does for those who don't know. Juniper is a TCP/IP router company, a router for the Internet. It was a time when nobody in the world believed that TCP/IP would be the protocol for the Internet. Cisco had gotten...

essentially all TCP/IP plans off their docket and bought a company called StrataCom. The reason why that's relevant, though, is Sun adopted TCP/IP in 1982. It was puttering along at a slow pace, like AI was puttering around for a while. Then in 1996, when we started Juniper, it saw an inflection.

When we started Juniper, not a single customer said they would buy it. We built it anyway, very much in a Field of Dreams kind of vision of things. But it was clear then that this puttering low end of the exponential was already happening. The reason I bring that up, and why Juniper was such a large payback for Kleiner, about $7 billion in distributed profits on a few-million-dollar investment, is that we guessed right on the exponential.

In the 14 years since, I've watched AI do the same thing. Right. Putter around incrementally. When I first wrote my piece on do we need doctors, do we need teachers, in 2011, it was late December 2011 when I wrote it, January 2012 that I published it.

It was clear AI was going to be large to me. To you. And at some point, capability would explode for two reasons. One, there was effort being put in: the best minds were moving to AI. And two, the potential existed for very large progress.

Right. So let's talk about that, the future impact. First, health care in 10 years. You have talked about this, and you caused quite a bit of controversy when you said it, and I think you were correct at the time, about how AI will change the experience of medical care. Well, it's very, very clear that expertise will be very close to free in a GPT-5, GPT-6 world. Mm-hmm.

And if you imagine that, then a physician's expertise will be in an AI. Definitely in the next five years, whether you're talking about a primary care doctor, an oncologist, so specialists, or mental health, a therapist, a psychiatrist, those will be in AI. Whether we allow them to practice or not is a different regulatory question. We can get to that.

But the capability will be there, and many countries around the world will be using that capability. So a doctor that's not a person. A doctor that's not a person having more knowledge than the median doctor, or 90% of doctors, probably, is where I would set the benchmark in five years. Wow.

Wow, 90%. So everybody can get a quality doctor. And it'll be in conjunction with a doctor, possibly, for the human element of care. Which is looking in your eyes and telling you bad things. Looking in your eyes, giving you a hug when they have to tell you you have cancer. Right.

But otherwise, an AI will know more and have better intuition, or whatever you want to call it. It will have a broader range of knowledge, better knowledge of state-of-the-art research. And outcomes. The best therapies. Every possible interaction. Do you know drug interactions kill more people than breast cancer in this country this year? Mm-hmm.

And so you can avoid those avoidable errors. Right. And that should happen, if the medical establishment lets it happen. Okay, but 90% of doctors, meaning almost all of them. Almost all of their expertise. Right. So what's the use of the 90% of doctors whose expertise it matches? Well, the number I've used in the past is,

80% of what doctors do will be done by an AI, but the 20% will provide much more of the human element of care, which they don't have time for today. Right. You know, to make you feel like they care. So they're still useful. In other words, they're still useful. I think most AI systems will be used in conjunction with humans, and humans who care about what they're doing.

Okay. One of the things you told Semafor: in 25 years, 80% of all jobs are capable of being done by AI. You called this, quote, the opportunity to free humanity from the need to work, though you admit it is, quote, terrible to be the disrupted one on the way to utopia. Talk about that. How is that level of job loss utopian? Now, I have said a similar thing, like there's no reason for many people to be doing what they're doing.

So let me be precise. What I said is 80% of 80% of all jobs. So 64% of all jobs. Okay, all right. Okay, 64%. Okay. Yeah. Still a lot, Vinod. Yeah. A lot of jobs will be displaced. Now, we've gone through that displacement before. You know, in the early 1900s, agriculture was 50% or so of all jobs.

By 1980, it was 4% of all jobs. But I do think this time is different because we've exceeded the capability of human beings, if I'm right, in the next five years. And most of these jobs will not need to be done by a human. And humans will have other jobs to do or other things to do.

More importantly, there'll be enough abundance. And I wrote a piece around 2014 saying AI will cause great abundance, great productivity growth, great GDP growth, almost everything economists measure, and increasing income disparity.

Because people don't have jobs, right? People don't have jobs. So what do we do with all those people? Now, the good news is the abundance will allow for redistribution. If you take a simple metric like GDP growth for the next 50 years in this country, and you assume 2% a year, per-capita GDP will go from $70,000 or $75,000 to $175,000; that's 2% per year real growth over 50 years. If AI accelerates that, turbocharges it to 4%,

per-capita GDP will be $475,000. If that's the case, there's ample room for redistribution, something the Republicans hate. But I do think it'll become essential, and there will be enough to afford a minimum standard of living for everybody by then. To pay them.
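For listeners who want to check Khosla's arithmetic here, a quick sketch of the compound-growth calculation he's invoking (the starting figures and rates are his round numbers from the conversation; the exact endpoints depend on which starting value you pick, so his $175,000 and $475,000 are in the right ballpark rather than precise):

```python
def compound(start, rate, years):
    """Project a value forward at a fixed annual real growth rate."""
    return start * (1 + rate) ** years

# Khosla's scenario: per-capita GDP of roughly $70-75K today, over 50 years.
baseline = compound(70_000, 0.02, 50)  # ~2% real growth, the historical pace
ai_boost = compound(70_000, 0.04, 50)  # AI-turbocharged ~4% growth

print(f"2% for 50 years: ${baseline:,.0f}")  # roughly $188,000
print(f"4% for 50 years: ${ai_boost:,.0f}")  # roughly $497,000
```

The point of the exercise is the gap between the two endpoints: doubling the growth rate roughly two-and-a-half-times the final figure, which is the "room for redistribution" he's describing.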

Pay them to live. Yeah, pay them to live or do things that are useful but not in today's jobs. And so when you think about it that way, that's a profound shift in the way we live, which the U.S. particularly has not been open to. How do you make that shift without it being...

The farming change, we don't remember it, but it was quite difficult from a social perspective. And people were very dissatisfied, to say the least. The same thing's going to happen here. We're already in a politically polarized moment. You know, it's romantic to talk about disruption. Like I said before, it's not fun to be disrupted. Right. So what has to happen then, if people like you don't need to exist anymore, in the work sense at least? Yeah.

You know, most people don't want the work they have. Do you want to work on the same assembly line, assembling the same car, for 30 years in a row? No. Nobody wants those jobs. They have a need to work. And I think the need part will go away. Mm-hmm.

I'd be quite happy maintaining a hiking trail out there, right? That's a great job for me. Right. Which AI cannot do. Which AI probably can't do. A robot could. Or I may still prefer to do it, even if an AI can do it. There are jobs people will want to do. I love my job. You love your job. Most people don't work in a job they would do for free if they didn't need to. So would AI replace venture capitalists? Could they figure out investments better than you?

You know, can AI do investments? Possibly. You're pausing. Let's get rid of the doctors, but venture capitalists are a special group of hummingbirds. No, I don't subscribe to this "my job is special." You know, a sufficiently advanced intelligence will make good investments, even if you don't need to make investments. The pause was because, in a world of great abundance... Mm-hmm.

capitalism needs to be redefined. Capitalism, as we've seen it, is great for economic efficiency. People compete, they have to get better, things become more efficient. Capitalism works when there is no abundance, when resources are scarce and you have to make the most of them. When, in fact, abundance becomes easy,

Then you have to rethink the role of capitalism. I'm not saying we need to go to socialism. I'm a total capitalist. But how we think about the role of capitalism will transition over the next 30, 50 years, probably within our lifetime. Within our lifetimes. All right, let's talk about OpenAI. You were the first VC to invest, people don't realize, when the company switched to its so-called capped-profit model in 2019. Yes.

Reid and Elon, Elon Musk and Reid Hoffman, were already in there; they had invested in it as a nonprofit. Why did you wait to invest when you were writing papers about it as early as 2012? What was the wait from your perspective? Well, let me be clear. We were investing in the best AI we knew how to invest in. The picture of how it would emerge wasn't clear. It was pretty fuzzy. I don't believe in big vision. I'm sort of like, you engage and you learn.

You know, we invested in a deep learning company called Jetpac, which would classify Instagram images and do a pretty good job of saying, I need X, give me some pictures of that. Right. The company failed. It was acquired by Google. And that's fine.

We invested in Caption Health, that started to, essentially, guide ultrasound. And we invested in a company that was self-driving for MRI machines. So you could do a cardiac MRI without an MRI technician. So we were investing for a while in these areas. Right.

When OpenAI came around, there really wasn't a mechanism to invest in sort of the core platform. It didn't exist. But I remember Sam calling and asking if we were interested. Sam wasn't going to take venture capital firms, because it wasn't clear where this was going or what would emerge. He wanted individuals who cared about AI and its trajectory. Right. Right.

So explain how the capped-profit structure works, very briefly. See, to me, that's a technical detail. It's 100x...

You get up to 100x your investment, not more, and the rest goes for societal good. And Sam called us because he thought we cared about societal good as much as we cared about the profits. 100x is pretty good. 100x I'll be happy with any day. Yeah. What really was the risk was, where would AI go? Would it ever generate any profits? Right. What would be the capability? Mm-hmm.
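The capped-profit mechanics Khosla describes can be sketched as a simple waterfall. The 100x multiple is from the conversation; the function name and the dollar figures below are illustrative, not OpenAI's actual legal terms:

```python
def capped_return(investment, gross_proceeds, cap_multiple=100):
    """Split gross proceeds under a capped-profit structure:
    the investor keeps up to cap_multiple times their investment,
    and anything above that cap flows to the nonprofit parent."""
    cap = investment * cap_multiple
    investor_share = min(gross_proceeds, cap)
    nonprofit_share = max(gross_proceeds - cap, 0)
    return investor_share, nonprofit_share

# Hypothetical example: a $10M stake whose position grows to $2.5B
# returns at most $1B (100x) to the investor; the remaining $1.5B
# goes to the nonprofit.
investor, nonprofit = capped_return(10e6, 2.5e9)
```

As Khosla notes, the cap only bites in extreme upside scenarios; below 100x, the investor is paid like any other shareholder and the nonprofit's share is zero.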

And would this be the way to get to scale in AGI? In order to have the money to do so. Now, Elon, who was an early investor, too, in the nonprofit, tweeted that OpenAI had become, quote, a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all. Basically saying it was a bait and switch. He's now started his own competitor. He's talked about it, made a lot of noise about it.

Why is it not a bait-and-switch to go from nonprofit to for-profit, even a capped one? And what do you think of his new effort? Well, I haven't asked Elon about his effort. You're in a better position to ask him. No, he's not speaking to me, but go ahead. But...

At one level, he's calling for a pause in research. Right, I was going to ask you about that. While he's accelerating his efforts. You know, it speaks volumes on the importance of AI. You know, whether it's his self-driving cars or...

Twitter or other mechanisms. I think he believes just like I do, like Sam does, everybody does on the strong power of AI to be a powerful economic force. And so, you know, it behooves him and others to do the most they can.

What I would say is when there's such a strong economic force, lots of people will play, and that's a healthy thing. And so when he says that, how do you respond? Well, too bad. Now it's too big, right? Is that essentially now it's a thing? Well, I don't think he quite imagined it would be what it is. So in some ways, these are...

I think sour grapes, you know, and he's trying to catch up. He's trying to catch up. He was, you know, as someone asked me, I said, well, he was early and then he's late. Now he's late, which is kind of an interesting thing. What's your relationship with OpenAI now? You're just one of their investors, correct? Well, I've always had a great relationship with Sam. With Sam Altman. We share a similar vision on the role of technology in society.

I think that's why he invited us; we were probably the only venture firm he invited to invest in OpenAI, because we had aligned goals with what was the original purpose of OpenAI, which is maximum good, societal good. So he says he's earning, he told me in an interview, an immaterial amount from his initial investment through Y Combinator. Is that a fair statement? Is he not going to be benefiting from this upsurge?

I think Sam is very direct and very honest, so you can take his word for what he's saying. At least I believe him when he says something. He's not benefiting from the success of OpenAI, so he can be true to the original objectives. Okay. Let's talk about government's role in regulating AI. It's something that's been much discussed around the world. There's efforts in China, everywhere else. In the U.S., it's a little slower, as always.

Sam told me there needs to be a global regulatory body like the International Atomic Energy Agency. Reid Hoffman told me he would do the equivalent of a blue-ribbon commission to declare a set of outcomes to prioritize and a set to avoid. What's your idea for regulation? I think any international treaty is a really bad idea. Okay. Why? It would be the equivalent of saying I'll trust Xi and Putin with the future of this planet. Mm-hmm.

Nuclear was very verifiable when used. Bio-warfare was verifiable when used. These things are verifiable. AI is not verifiable. And I'd be shocked if China doesn't have an effort to have bots talk to every U.S. voter in the next election to the best of its ability, one-on-one conversations, manipulating their views. Mm-hmm.

They'll definitely do it in Taiwan. Certainly. Certainly. You can't track it. Yeah. So the question you have to ask with regulation is what danger are you trying to eliminate? You know, there's sentient AI, which some people talked about. I think... Killer robots. Killer robots. Mm-hmm.

You know, it's silly to imagine today that a robot could get away from us and really kill all humanity, that it would want to, and all those things. That's the current plot of Mission: Impossible, in case you're interested. It's great for fiction. Yeah. But that danger, I would venture to guess, is a lower probability than the risk of an asteroid hitting planet Earth in the next X years, 50 years or 100 years. Right.

It's a risk. I'm not saying it's not a risk. It's evolving rapidly, so we have to keep very close eyes on it. But far greater risk than sentient AI is the risk that China uses it to influence global economics and hence global politics. They've done a lot.

to build influence globally outside of China, which is new under President Xi. Yeah. The Belt and Road Initiative. Minerals, everything. Minerals. They have a trillion dollars in loans outstanding, donations. They own the place. They own the place. So they have the physical resources, which are essential to economic development. And then they offer AI globally for good. Mm-hmm.

But it'll come with the Chinese style of political system. And I want Western values to win over the Chinese political system. And by the way, it's a reasonable assertion for them to say most people will be much better off. Under China's rule. Under China's political system. And they genuinely believe it's a better system for society, not just for President Xi.

They believe it's a superior system. I happen to prefer our system. So that's the doomsday scenario, that it's China's century, in other words. Yeah. I'm not as bullish on China long-term, but the fact that they get ahead of us in the AI race, use it for cyber warfare,

Real warfare in the field, intelligent drones, all that stuff. But more importantly, in the social sphere, like influencing elections and opinions, but also real, true economic development. So you've said that's why, as Musk and others have advocated, AI should not take a pause; you've argued against a pause because of the quote, we need to win the race against China. You've also worried about, as you've noted, the existential threat. What does an arms race with China mean that we have to do? Obviously, we have to do it. We have to fight with them on this issue.

There's no question we are in a techno-economic war with China. And by the way, there's this great book I read recently called Danger Zone, which argues that in the next 10 years, say by 2033, China will peak. And it argues that authoritarian regimes get extremely dangerous when they're reaching their peak power.

Because of demographics, because of slowing GDP growth, this trillion dollars in debt all comes due and will not be repaid. So it causes real consternation internally in China, and that makes them much more dangerous than they would be if they were just happily growing at 5% a year for the next 30 years.

So I do think they're particularly dangerous because they will be peaking over the next decade if I buy this book's thesis, which I do.

And because of that, they'll deploy their AI to their benefit, both for soft power and hard power. So what does the U.S. do then? You say democracy and freedom of speech is at stake, but you're also going to make a lot of money. Is democracy at stake here or what should the U.S. do? What should the U.S. policy be right now on this? And I assume you're working in concert with government. Well, I talk to government a lot, try and influence government.

against those who are calling for rapid regulation or pauses in development, because the biggest danger we have, it's not saying we don't have a danger in AI. We have a much bigger danger if we fall behind China, especially because of the broad economic power it conveys to the nation that wins. And there isn't this notion of one winner takes all. It may not be. It might be. I just want to change the probability that they win or get ahead of us.

We'll be back in a minute.

You, in March, just before TikTok's CEO testified before Congress, hosted a dinner with Peter Thiel in D.C. for investors and lawmakers to press the case of China as a techno-economic threat. House Speaker Kevin McCarthy was there. What was your goal? Because at the dinner, Thiel compared TikTok to homelessness in his speech, and he said, quote, both are a really obvious problem. I'd love for you to talk a little bit about that dinner and then TikTok itself, because it's emblematic. TikTok is sort of the shiny... Yeah.

Yeah, here's the broader perspective I would give. President Xi in the 14th five-year plan in China specifically called for winning the AI race and the 5G race. Those were two things he really called for. Now ask yourself why? Because AI will exert great economic power globally, which is very much in their interest. Winning the 5G race

They already have their telecom equipment in 100 countries, and that lets them spy on nations and individuals within nations the way they can do it with impunity within China. Right.

And so I think the 5G race is about surveillance of citizens everywhere. The AI race is about economic power. And TikTok is an extension of the surveillance capability. And that may not happen, but it's not a risk we can afford. So what do you think they should do? I'm going to point to TikTok only because I do think there are a lot more important things happening, like 5G and everything else. But what's to be done about something like TikTok? Because it's so symbolic and emblematic of the issue. Here's a very popular app. A couple of years ago, about four or five years ago, I wrote a piece that said, best product I've seen in a long time. I'm using it on a burner phone. We've got to get China out of here. And I got a lot of pushback that I was anti-Chinese and anti-Asian. And I was like, no, I'm anti-Chinese Communist Party.

But now everybody's like, oh, we should ban them and this and that. What is your feeling of what to do about that particular issue? I wouldn't have a question: we should ban TikTok. Okay. Just because of the potential danger of it being controlled by the CCP, the Chinese Communist Party. We have to do everything we have to do for national security and protecting our citizens. And we are very influenceable. If you look at China,

The Chinese Communist Party controls companies. In U.S., it works the other way. The companies control government. People haven't realized how different those systems are and what purpose these influence vectors serve. And we have to realize that.

You know, it's not about, I'm not saying any of this will happen. I'm saying there's a risk of it happening, and it's about adjusting the probability of these things happening. Downward, right. You know, you've criticized Apple and Tesla's business in China. Explain why it's a problem and what they should do differently. Well, to be clear, I haven't criticized Apple's business in China or Tesla's business in China. Mm-hmm.

What I have said is they can't have independent views on what is the right thing for the Western world because their interests are conflicted, and Elon Musk would get shut down in China or in Russia. So he can't say what he thinks. So he can't say what he thinks. Neither can Tim Cook. I think that's a very different statement: that in their own self-interest as capitalist companies, they have to do what's good for their shareholders. And that means that

They can't be trusted to represent... American interests. American interests. Should they be there at all? Should they be there at all or should they move somewhere else? Look, in the current system, it's for every company to decide what they do, not my choice. Now, we don't have a China operation and there's a reason we've never had a China operation. When I was at Kleiner, I resisted Kleiner having a China operation. It was only after I left in 2004 that they started the China effort. Hmm.

So I've never been a fan because the Chinese government and the Communist Party will not let an American company win long term. They will let them strategically win short term.

but not long term. That's correct. Okay. So let's move to India and its role in tech over the next couple of decades. Sam Altman met with Prime Minister Modi during a recent visit to D.C. and reportedly discussed collaborating on AI. I want to know what collaboration would look like. Modi's visit was controversial given the human rights record there. What do you make of him?

Well, I think Modi has done a very good job on the economy and getting government aligned. I completely disagree with his Hindutva movement bias. I think the biggest danger for a country like India in continued economic development

and GDP growth is leaving a population behind. So leaving the Muslim population behind. Even if you were only interested in the self-interest of the Hindu population or his constituency, I'd make sure they're not left behind. 200 million people left behind is a recipe for social unrest and disaster, which could set back the economy. So my view is on that front, he's being...

politically opportunistic, but causing a long-term problem. But of course, politicians are okay with postponing problems by a decade or two. But it is the single biggest risk in India's economic development, I believe, leaving one population behind. And so what's to be done? Because this is not a solvable problem. Do you think it'd be good, since you're so well-known in India, to take a stronger stand there? Well, I don't think it's an unsolvable problem. Mm-hmm.

Especially with the power AI confers on us, coming full circle to what India can do with OpenAI. First, India has the talent base to develop OpenAI or applications that could do a lot of good. Free doctors, free teachers, free oncologists, free robots. I've had a number of conversations along that line.

So there is a lot of good to be done, and you don't need to impoverish one population to give to another. We will have enough abundance to solve the income disparity problem or the social distribution problem. I think the minimum standards on the planet by 2050 could be awesome. The minimum standard. So is it important for you to take political stands somewhere promising like India if you want them to be on this journey? Yeah.

Yeah. You know, where I spend my time is a whole different question we can get to. You know, I have to trade off between working with Bob Mumgaard on fusion. We're working on a public transit system that should be in every one of the 4,000 cities around the planet instead of just the 200 cities that have public transit today. There's enough important things I'm working on. So it's a

Question of how effective can I be? By getting into a beef with Modi. You want him to develop AI versus focusing on the problems you see. Well, I think those are independent vectors. And I think on the economic development front and the technology development front, they'll probably do a reasonable job. I could help, but probably not materially. Mm-hmm.

And I stay in touch a little bit. But I don't allocate a huge amount of my time to that. To that, doing that. Okay. You just mentioned climate. Let's talk about climate. You mentioned a number of projects, public transportation, fusion. At the Breakthrough Energy Summit in October last year, you said, quote, if we try and reduce carbon by 2030, we'll be much worse off than if we set a reduction target of 2040. Slow down? Explain.

I don't mean slow down in the research and development sense, but I do mean forcing uneconomic solutions is the wrong approach to solving the climate problem. I wrote a paper about 10 years ago, maybe 15 years ago, in which I defined what I call the Chindia Price.

It's the price for any technology at which it would be broadly adopted in India and China against its fossil competitors. I think this was in 2010. Yeah, I think Gates calls it the green premium, right? When does it end? Like at that price, it gets broadly adopted. Above that price...

is going to be just showcase, you know, dressings on the cake, not the cake itself. Right. And so you think doing that too quickly is a problem. You've had this experience. You said in 2007, the clean tech field was ripe for VCs and you said they're easy pickings. That was a tough time for you, those investments. And I know you and I have talked about that. You had investments in biofuel, you had solar investments. Yes.

It didn't work out at the time. Was that what you're talking about? Too early? No, that's not what I'm talking about. Solar was actually quite profitable for us. Each had a different trajectory. Biofuels,

Some things worked out, some things didn't. And I think for aviation, long distance aviation, for example, biofuels is still the right answer. And LanzaTech is a pretty popular public company developing it. Were you there too early? Do you feel regretful over these investments or do you? Well, actually, the climate investments for us worked out fine. You know, some of the solar things worked out fine.

Some did, some did. Enough did to make it profitable for us. LanzaTech is a very successful company in biofuels. Yes, we lost some companies, but we made more money than we lost, which is the key metric in venture capital. You want to build... No, I know. But when I interviewed Doerr, he was like, we didn't, it wasn't the hit we wanted it to be. He was also early in a lot of those things and felt it was too early. Now, Kleiner's strategy was different than ours, right? Mm-hmm.

And I'd left Kleiner by 2004 and was working on our own. Right, but he was doing it separately. You know, QuantumScape has been a successful investment in batteries, and we bet on batteries. At the same time, we bet on biofuels for cars and aviation. Biofuels for cars lost to batteries.

Biofuels in aviation looks like the lead horse still. All right. So the biofuel company you did back, KiOR, K-I-O-R. It went bankrupt. Fortune actually asked, and I'd love you to

comment on this if it was, quote, evidence that fast-moving venture capital investors are ill-suited to tackle such technically demanding, time-consuming endeavors. I disagree with that. Tell me why. You know, QuantumScape, for example, a slow-moving, long time to develop, but in a very, very large market, has been very successful. Mm-hmm.

LanzaTech, if you measure profits to date, has been very, very profitable; it makes up for many, many KiORs in terms of our losses, which is the nature of venture capital. High risk bets, a small percentage win. They make up for all the losses. And I think biofuels will work out for us, as it worked out with LanzaTech. Batteries worked out for us. Impossible Foods.

Impossible Foods has been very, very profitable. Remember, we bought 50% of the company for $3 million. So it's been, even at depressed prices, a very profitable effort for us. And that's the nature of venture capital. So you don't feel that you were early to those, or is this the time now to really invest? One of the things, you brought up nuclear fusion. This is a tech you've invested in. Sam Altman just announced he's backing it. Bezos, Peter Thiel, Bill Gates.

Talk about fusion, for example, which Marc Benioff has called the holy grail. Many people think we're decades away from functioning fusion power and plants. There are efforts to make small modular nuclear devices. People are thinking about nuclear energy plants, to restart them. Where are you looking in climate? So my view, and I tweeted about this recently, is

that fission will take longer to permit than fusion will take to develop from scratch. So I connected with Bob Mumgaard about starting Commonwealth Fusion Systems when he was a postdoc at the MIT Plasma Science and Fusion Center. So you think that's faster than any political... My bet is that the technology will be developed faster than you could permit one nuclear fission plant here in the U.S.,

with all the risks associated with it. So I do believe that's a large solution. It's not the only solution we are working in. We have some investments in solar still, but

But for dispatchable power, which is reliable power when the sun isn't shining and the wind isn't blowing. Right, no, renewables aren't enough. The wind, solar, and water are not enough. Wind and solar are great. We've invested in those. Not enough. Not a lot in wind. Right. But you need reliable dispatchable power. And for that there are two technologies. One is fusion. And the other is geothermal.

often ignored, but we've invested in both. Both seem to be working out. And I think either path would provide a great amount of power economically. And you're leaving out nuclear, small modular devices, all these. So we've not invested. I used to be bullish on nuclear. I'm less bullish. So because of the permitting cycles, the social objections,

the extension in permitting, I think you could develop fusion faster than you can permit one new nuclear plant. And there's no way to overcome that? Because many people think it should be. I don't think we need it. I do think some of the risks are real. I do think we can address risks, and we were investors in TerraPower too. But I think fusion is close enough that it will far exceed the scaling capacity of

any other technology. All right, so let's talk. You did talk about a tweet. That was a funny tweet, but you're kind of active on Twitter these days. You're getting a little trolly there, Vinod. What's happening? Which tweet are you talking about? I'm not even asking about your beach controversy. How is the beach, by the way? Which tweet are you talking about? Oh, you have a lot. Let me read... do you want me to read you a couple? I really enjoyed your English major tweet.

You said most English majors lack a goal or purpose. Why else would you major in your native language? Justine Musk, ex-wife of Elon Musk, said most English majors are women, so tech bros are basically saying that women who go to college and study literature are lacking in purpose and goals. No wonder we love them so. Why do this, Vinod? I'm just curious. It's been a curious development. It was half in jest, but not really. Yeah, I know. I do think...

When you spend $100,000 or $200,000 for college in the U.S., you'd better get an employable skill. Okay. Now, it won't be important once we... Yeah, but you were saying we're not going to be working. So what's the difference? If we aren't working and we reduce the need for employment, then we'll be fine. Right. In fact, I've said that more recently. Right. I've also said, by the way, I'm not against English as a graduate degree. Okay. Just undergrad...

And purposeless people end up with a lot of debt and a job waiting at a restaurant. So pointless. So pointless jobs. You don't like them just because... You know, an English major is not about grammar. It's about literature, but okay. All right. Yeah.

Why? You know, and I'm not saying nobody should read literature. Okay. And it's not everybody. Yeah. Half the people who do English will do great because of their knowledge, education, diploma, Stanford stamp, whatever. Yeah.

But I'm talking about the downside case of the people who won't get jobs because they don't have a skill. Now, this will become less important in the age of abundance in AI if you buy my thesis. So go ahead and read Dickens. Go for it. It just doesn't matter. Well, do what you enjoy. Yeah, if one

has the leisure of not needing to work. But I'm just curious why you did that. I'm fascinated by it. It was unusual for me to see it because you don't strike me as a dunker and a meme lord or anything like that, a dank meme lord. You don't have that reputation. I'm trying to encourage

more kids to think about the right things. Okay. All right. So it's not sort of the current trend of venture capitalists opining on everything from Ukraine to whatever they feel like talking about that day. Look, you can tweet because you're trying to get the most followers. I don't try and do that. I sort of, when I have time,

I will tweet to transmit some insight that I have. Right, right, right. So you find it useful. You find it useful. I find it useful. Sometimes it's funny and that's okay, I think.

I had a tweet around Elon and Mark's fight. What's your feeling on that? Seems juvenile. It's funny. You know, I don't think it's important enough for us to worry about it. If it makes a few people laugh, then that's a good thing. A few smiles is a good contribution. All right. OK. I think it's slightly toxic, a lot of it. I don't spend a whole lot of time on Twitter. No, I know. You recently said you probably joined Threads. Have you done it? What do you think about it?

You know, to be honest, I did join Threads because I think it could be an interesting platform. I haven't done much on Threads. Okay. So my last question, you've been in venture capital for so long. How do you look at venture capitalists now and their use? Here we are on Sand Hill Road, the famous place. It's changed drastically. Has it changed for the good or bad? Do you see innovation in your own industry? Yeah.

Look, as industries mature, they diversify. Venture capital isn't one thing. My view is at different stages in your life, you worry about different things. I'm most worried about maximum impact and a better society. And I think venture capital is huge leverage on that. Much better climate technologies, much better transportation, much better medicine, much better education. All those are really, really exciting things to do.

They won't happen on their own. If you'd left the world of electric cars to GM, it'd be on the GM schedule: by 2035, I think the forecast was we'd have 50,000 vehicles or some ridiculous number. Then an instigator of change like Elon Musk comes along and causes everybody to rethink. I think that's possible in every area, from housing to cars to transportation to medicine,

entertainment in every area. And I'm very, very excited. Frankly, it's much more fun than playing golf or... So you're not retiring? I'm not retiring. As long as health permits, with that one caveat. I'm not playing golf. I'm not going sailing. I have much more fun uses of my time that really excite me. Okay. All right, Vinod, thank you so much. I really appreciate it. Thanks.

So you and Vinod Khosla won't be going golfing or sailing anytime soon, it seems. We never would have anyway. It doesn't matter. So we're just continuing to do what we don't do together. That's true. Maybe I'll do another interview 20 years from now since both of you will still be working. No, I shall not. You will not be in the 80% of the 80% whose jobs are taken. No, never. I thought

it was interesting that he attributed the jump in AI not to the transformer paper and not to the increasing scale of computing power, but to the idea that people are moving there. That was a leading indicator for him, that humans were moving there. Yeah, I think he's right. It's a very VC way to look at it. Well,

That's how they watch things, like where are people going, whether it's crypto or anything else. And I do think people do things in their self-interest, and so they're watching self-interest happening. And that's one of the things that drives Silicon Valley. Everyone rushes to the internet or rushes to crypto or rushes to whatever. But Kara, it's not all benevolence? No, nothing. None of it is. Shocking. Self-aggrandizement.

fuels everything in Silicon Valley. So, you know, that makes sense for me to say that. And of course, those underlying trends, the transformer paper, the scaling of compute, are also what those people are watching, not just their self-interest, but where are the jobs? What can they build? Yeah, but I don't even think they read. You don't think they read? No, I don't. Where can they get money? Oh my gosh. I don't know. I think engineers are...

So I feel I've met so many earnest engineers who are just excited about fixing the next problem. Well, they get bored. They get bored and they move on to other things. They have very hummingbird-like personalities in a lot of ways. They shift to the next thing. They do. He outlined this vision of AI doctors. It's all kind of dystopian. And in a way, it kind of makes sense, like this 80% of 80%, because I've heard doctor friends say to me, look, I should be able to...

look at numbers for most of my patients and then spend more time with a smaller set of patients that need intervention. So that makes sense.

But don't you think that's a bit dystopian, or you think that is what the world will look like? I don't know if it's dystopian. It's happened to farmers. It's happened to manufacturing people. It's just because it's happening to richer people. I think, look, doctors should spend more time with patients and not doing forms and other things that they're not as good at. I don't think there's anything wrong with replacing rote work, wherever it may be, with digital versions of that that are better. There's not any great art

in reading a radiology scan, like an X-ray. Well, I think there are, of course, jobs that become moot with technology. That's definitely true. But I think the challenge for me was actually, he said one,

humans will have other jobs to do. And you've said this before with drivers and cars, but my big question is what jobs? I don't know. I don't know. I don't know. Nobody imagined the internet would create Uber. Nobody imagined all kinds of things. I don't know, something else, but none of us are felling trees with axes anymore. We're not doing it. Yes. I think the question, the consideration of like, okay, well, what are those jobs? And the second problem is, well, he says abundance will allow for redistribution. Sure. Sure.

will allow for redistribution, and maybe the math does, but people's politics and mindset in this country of, well, we don't really want to pay for people who don't work to have health insurance or basic income. Those things are hard to change, you know? No, they're not. That's how life happens.

I mean, they didn't have gay marriage 20, 30 years ago and now they do. One out of five children in the U.S. live in poverty and they haven't found a way to fix that. So it's not like just by people suffering, you're going to change something. What has to happen is that we've got to come up with new jobs, and that's the challenge. Rather than saying, oh no, oh no, on everything, let's get together with smart people working together to figure out what the next jobs are, how we should reform things. Maybe we won't be able to do it, but

saying it's not going to happen is not really particularly helpful. I think neither thing is helpful. Saying it's not going to happen is not helpful, but saying that it's assuredly going to happen is also not helpful. It is a very big problem. But it is going to happen. But it is. It is. It's like saying cars aren't coming. I mean, it's like saying that redistribution will happen or humans will have other jobs. Well, yeah, but that's his, it's his opinion. Yeah. It's not easy. No one says it's easy. I think it's just, he's just stating what I think is probably accurate.

And so this is my idea. Yeah, which isn't dissimilar to Mustafa Suleyman's idea. I think the question is, is it going to be an idea that countries can get behind? In Europe, I think there's a lot more inkling and desire for that than in the United States, which is just built on a different ethos. That's an old...

That's an old idea of the U.S. That's not, that has been dying for years. And so I do think we've changed a lot more than you realize. And I think there is an openness to a lot more creative solutions for people. I think people have reached a limit where selfishness is not necessarily helping us. It's not about selfishness. I think it's about whether or not people believe in big government. I think actually government's being gutted more and more. And whether people want to get behind Social

Security programs and other programs, and there's health care, all kinds of things, free education. Look at the pushback Biden's getting on loan forgiveness, which, by the way, has problems. OK, we suck. So I don't know what to say. We have to not suck, right? We will have to do something. I think that this is like the hopeful way to look at this, and the way that I'm trying to look at it is, OK, this is going to be the push that we need to get to

a more collective society and a society that addresses some of these huge underlying problems, chasms that we haven't been able to cross. But we were able to cross, for example, in the pandemic, there was a coming together. And I'm hopeful with AI that there will be a coming together to deal with some of these things.

Well, we'll have to deal, you know. Again, I think most things spin forward, not everything, and we go backwards. But kids used to work in factories, you know. Things change. And we didn't think they were going to change, and they do. And so this is an opportunity to change in a good way. And, hopefully, I am heartened by government's movement so far. I don't know if it's going to go anywhere, but to say, oh, they'll do nothing again.

I hope they don't. They have an opportunity to do good government. So we'll see. But I think the difference also is things do change. And one of the things that's changing, and we talked about this a lot, is America's power in the world. I think we have a lot more power than...

than we realize. Yeah. But he was making an interesting argument about the power of China vis-a-vis the rest of the world, the entanglements they've developed, the financial leverage they have over a number of countries, like a trillion dollars to 150 countries, right? Yeah, they've got plenty of problems inside China. It's not a walk in the park for them. And we always put these monsters up in front of ourselves to motivate us. But I would take the U.S. any day of the week, twice on Sunday. Yeah, I don't...

see them as monsters or this. I think it's more the idea that there are competing worldviews and competing ways to look at economic development, competing companies that, you know, we have to think about because the power is shifting and, and,

It's just, it's something that the U.S. needs to think about. In any case, Vinod's a really smart guy and he's often very much ahead of things. And I'm glad he's still pressing forward on interesting new things. I am too. Yes. Want to read us out? Yep. This episode was produced by Naima Raza, Blakeney Schick, Cristian Castro Rossel, Megan Cunnane, and Megan Burney. Special thanks to Sheena Ozaki and Andrea Lopez Cruzado. Our engineers are Fernando Arruda and Rick Kwan. Our theme music is by Trackademics.

If you're already following the show, you get an honorary English degree. If not, you get an honorary English degree. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network and us. We'll be back on Thursday with more.