I'm Barry Weiss, and this is Honestly. Hi, Barry. Hey, Barry. Hi, Barry. Hello, Barry and crew. I'm calling today because I've got a question. So my question for you is... So I've been wondering about something for quite a while. I have a question. I have a question for you. I have questions. I have questions.
And today, another installment of Quick Question, this time with former Google CEO and Alphabet chairman Eric Schmidt. Eric, thanks for doing this. Thank you, Barry. Okay, so by way of introduction, Eric, you've been working in one way or another on creating the digital world that we all now live in since the 1970s. You spent your summers in college as an intern for Bell Labs, where you helped build a revolutionary computer program called Lex.
Then you were a leader of some early startups that built out what the internet would become. And you were so good at running these companies and expanding what was possible that you were the guy that was picked by founders Sergey Brin and Larry Page to run a little company called Google. Google me, baby. You better ask somebody. You better Google me, baby.
You were CEO there as Google won the battle for search. Sorry, Ask Jeeves, rest in peace. And then you oversaw its expansion into a kind of company that I'm not sure has ever existed before. It grew from Google Search to Google Mail, Google Chat, Google Maps, Google Meet, and now it's so much more. It's a smartphone company. It's a video streaming company. It's a cloud storage company. And it has almost 150,000 employees.
It's hard to even summarize what it is, and that's in part because it has more political power than many nation-states.
You have been a pioneer at every chapter of our big tech revolution so far, and now you're looking at what's coming next. You recently chaired the National Security Commission on Artificial Intelligence, and your most recent book, written with Henry Kissinger and Daniel Huttenlocher, is called The Age of AI. Among other things, I want to know just how long I have until the robots come and eat me. So let's jump right in.
Eric, what is your first memory of interacting with what is now known as the Internet? Thank you, Barry, for all of that. When I was a graduate student, I worked at Xerox PARC. And the foundation of the Internet was invented by Bob Kahn and Vint Cerf in roughly 1971.
And in the early 1970s, there was a research network that I could use at Xerox. And I didn't think that much of it. There were 30 or 40 nodes on it. I didn't understand that networks have this property of expansion, and that when they start to expand, they accelerate. I never saw the impact of the internet globally. In fact, when I was a graduate student, I built an email system that had a property that you could just rename yourself to anybody else, because there was no possible misuse of the internet, was there? That's how naive I was. When did that understanding start to change? I think the industry, and certainly myself, didn't really understand what would happen until the early 2000s.
The early Netscape success, which was 1993-1994, created this bubble, but everyone did well. You know, cancer was cured during the bubble, and everyone had suntans, and there was enormous wealth, and we all felt incredibly powerful. And then when it all collapsed and the internet was seen as something real, we discovered that there were all sorts of ways it was being misused. Huge security issues. To show you how naive I was,
my friend and I were working, and we happened to be in Myanmar, at this beautiful lake in the middle of nowhere, and above us was a conflict between the Rohingya and the Buddhists. And the internet was just starting in the country. And I, of course, pronounced in my typical, completely wrong way,
that the internet would be used to soothe tensions, that it would allow people to communicate and understand each other better. Well, what a naive error on my part. The internet was used to amplify the hatred between the two groups. Today, this occurs routinely and we don't stop it. We have a problem that our information technology, and in particular these targeting engines, are being used to take what is worst about human behavior and make it even worse.
Part of the reason we were naive is that this technology is built by people like myself, for people like myself. And most of us never went to the college classes in psychology to understand human motivation. So we just got it wrong. We didn't understand. I mean, Alexander Graham Bell, when he invented the telephone, didn't sit there and say, I'm worried about the telephone being used by criminals.
Right. And yet we should have worried in the 90s that the Internet would become part of humanity and it would be used by evil as well as good. Well, one of the problems that you are particularly focused on at the moment is AI. And you've recently warned that we're not ready for what AI may do to us. Right.
What are you seeing, Eric, big picture, on the question of AI that you're trying to alert the rest of us to? Well, you know, most people think of AI in terms of the movies that they see, where inevitably the female robot kills the evil male scientist. Ex Machina, the best movie, yes. By the way, that might actually be justified in some cases.
But we're not busy building those killer robots yet, partly because we don't know how to make software that has its own volition.
What you see in the movies where the robot says, "Oh my God, I have this threat. I have to think it through and I have this thing and then blah, blah, blah," and then they interact in a negative way with all the humans. We can't do that yet with AI. But what we can do is study patterns with AI in an incredibly powerful way. We can translate language, we can make recommendations, and we can generate new science.
So the good news about AI is that you're going to see an enormous transformation in science. So what does this mean for humanity? That's why we wrote the book. It's good in the sense that we'll live longer and so forth. But it also allows people to, for example, target individuals with specialized messaging. The concept of coexistence with a new intelligence, a different intelligence than human but a human kind of intelligence, is really unexplored. How would you feel if your day-to-day work partner was a computer that appeared to be smarter than you whenever you asked it a question? You're a former New York Times reporter. You can say, write me a column about this, and it writes a better column.
Now you check it, of course, because you want to do your job and then you submit it and you get paid for it. How would you feel about yourself? Would you feel that it was smarter or that you were just a smart tool user?
I'll give you another example. You have a two-year-old child, and they get a smart bear. The bear gets smarter every year, because we get a new bear, and the kid gets to be 12, and the bear and the kid are best friends, right? The bear's watching TV, and the bear says, I don't like this TV show. And the kid says, I don't like it either. Oh my God. And you say, well, that's probably okay because it's a really smart bear. But what if the bear is also learning at the same time?
In the book, we spend a lot of time talking about the fact that these systems are not only intelligent, but they're also learning at the same time. How do we know what they're learning? They can't even explain it themselves. So, Eric, you're talking to a civilian whose understanding of AI is basically limited to the movies Her and Ex Machina. So when you say AI, in the simplest definition possible, what do you mean when you use that phrase? Well, the movie Her is much closer than Ex Machina. In my community, many people are obsessed with a movie called Ready Player One,
where virtual worlds of avatars are created and you go from a dismal existence in a dystopian future Earth into this dynamic and interesting world where you are younger and more powerful, your friends are younger and more powerful, and everything is exciting.
Well, I tried Oculus for the first time yesterday. And aside from the nausea, I sort of saw the future. I feel like I saw what you're describing. Oculus is a good example of where it's going to go, although it will be seen as very primitive 10 years from now, I think.
And roughly 20% of people have trouble with the 3D glasses from a standpoint of nausea and balance. So people are working on this. These problems are all going to get solved. It's still early. But over the next decade or two, you'll have immersive worlds that will be very interesting. In the short term, the biggest thing that's going on, there's a couple of things. In the book, we talk about a program called AlphaGo, where the computer discovered moves in a game called Go that's played largely in Asia, a very famous and long-standing game, 2,500 years old. The computer discovered moves and strategies that no human had ever discovered. Now, is that an artifact, or is that a real discovery? Is that a major message to us that these things are smart?
We also talk about a drug called halicin, which was developed at MIT by a group of scientists, smart humans who set the computer off to look across 100 million compounds to find a new antibiotic. And it appears that they have found one. And they did it in a really clever way. And we talk about something called GPT-3, which is technically known as a large language model. And there, what you do is you have the computer read everything it can find, and then you figure out what it knows. For our book, for example, we asked GPT-3, can you describe what it's like to be a human? And GPT-3's answer is, no, I'm not human. I'm not a reasoning machine. I'm a large language model, and I have limitations in what I can do. Now, is that intelligence or is that scripting? We don't really know.
What's going to happen is these conversational systems. The most recent one announced is something called LaMDA, from Google, which looks very impressive. There are many others coming. With these large language models, you'll give it a seed. I'll give you another example from yesterday: William Gibson writes a science fiction story about Voltaire.
That's the entire thing you say to it. And then it produces science fiction around the historical figure of Voltaire in the William Gibson style. That's how powerful these things are. Notice that I said something very important. I said the human told it what to do. My own opinion, I want to say very clearly, is going back to the killer robot example. We don't yet know how to have these systems have their own volition.
their own design, their own task. They're going to be incredibly powerful assistants for you and what you do and me and what I do and everyone else we know. It's going to be a revolution in intelligence and capability for every human being. But as partners, it's not at all clear how we get to them being on their own, them being the computers. Okay, well...
When it comes to the dangers posed by AI, the first thing that comes to mind for me are these viral videos from Boston Dynamics. And in some of them, these robots that look like a mix of a dog and a horse move so much like living animals. But they can also open doors. They can go upstairs. And then there are these other videos that show these human-shaped robots.
And they dance, they move, they do parkour, and they look like they're alive. It's really uncanny. And you sort of have two feelings when you look at it. On the one hand, it's amazing. But on the other hand, you look at it and think...
That can conquer us. And I'm curious what you see. So Boston Dynamics was a Google subsidiary for some years and I visited them multiple times. I'll never forget walking into their warehouse and they had one of these standing robots on a stone bed, human size, six feet tall. And you can see the videos of it. And it was in chains.
And of course, it's computer programmed. It can't do its own thing on its own. It's not going to attack me and say it doesn't like Eric, especially since I represented its owner. But nevertheless, I had the same fear that you just had, because of the shackles that were on the robot. So I would encourage you, as entertaining as that is, that's not what we're building right now.
Boston Dynamics builds very specialized robots for transportation up and down hills. They use animal mechanics to do amazingly powerful things that animals can do that humans can't and that now these robots can do. But they're under direct control of a human being. They're not AI robots.
The most likely scenario, and of course this is all projection, is that you will slowly find your world surrounded by AI systems that in general make your life better. But let me give you an example of a conundrum. Let's imagine you have a Google engineer who's brilliant and they build a system which perfectly optimizes the traffic in D.C. or New York.
They fill the city streets perfectly, so there are arbitrary delays to make sure that the total delivered speed is the best. And you have a woman who's pregnant, and she's about to have a child, and there's no go-faster-because-I'm-having-an-emergency button in the car. So that's an example of this point where the AI system could end up controlling humans, and we might have a loss of freedom that we don't understand we gave up.
The fact is that today, if the scenario I'm describing occurred, this woman would be going as fast as she can in whatever transportation she can. And if a policeman stopped her, that policeman, man or woman, would say, let me assist you in getting to the hospital. That's how our system works. How do you program that into these systems? We don't know yet. I worry about a lot of things, but I worry that these systems, as they get so very efficient, end up restricting some of the personal freedoms that we take for granted in terms of human motion and human intelligence and activity, the libertarian side of humans. I want to do what I want to do, and I want to be free to do it. It's a fundamental principle of American, and I think human, existence. I guess I want to push back a little bit on the idea that that's not what we're building, because
People start off building one thing and it can easily become another. Mark Zuckerberg built a tool to decide whether girls at Harvard were hot or not, and now that thing is a totally different machine that's hacked its way into millions and millions of minds, shaping their views of the world, their society, and themselves. So sometimes the thing that it starts off as isn't the thing it turns out to be, right? That's true of technology in general, and your story about Mark and Facebook is quite accurate.
When you think about the killer robot scenario, it's not like people are not going to be watching. So imagine a situation where some evil genius company starts building these things. Somebody's going to notice, and somebody's going to regulate it, and somebody's going to send the police to arrest the robots. I'm not as worried about that, because it's such an obvious danger. I'm much more concerned about the impact on human ability and the identity of humans.
How do you feel about your child having their best friends in their entire childhood be non-human? Is that good or bad? That's a huge experiment. We've had children in humanity for about 70,000 years, as long as humans have been around. We've never run that experiment and now we're going to run it at scale? That seems pretty dangerous.
Eric, looking forward, do you foresee a world where AI could become so powerfully intelligent and productive that it would sort of make us look like monkeys? Let me give you the most likely positive scenario. Okay. And I can give you the most likely negative scenario. But the most likely positive scenario goes something like this. And this is speculation over decades.
So the digital transformation of the world will ultimately allow everyone to live in the world at the level that, say, a millionaire lives today. They'll have good housing. They'll have good access to school. They won't have to till the soil, because food will be grown indoors, and it'll be really good food and very healthy. Health care will be universally available. You know, all of these utopian views, where we deliver this to all of humanity. At that point, what will humans do?
Humans are not going to just sit around in a kumbaya moment and, you know, watch the equivalent of television at the time. They'll build complexity. They'll compete over attention and they'll compete over who gets the best orchestra seats and things like that. But it'll be a pretty good life.
At the same time, in order to achieve what I just described, not only do you have to have these intelligent systems that help us invent the technologies I just described, cheap urban farming, manufacturing of high-quality housing inexpensively, new materials, new physics, new distribution and so forth, but you're also going to have to have the systems working harder on their own to invent the future.
And there are people who believe that that will occur. The estimates are typically 20 to 30 years from now. Who knows? And one person, Ray Kurzweil, calls this the singularity. And what he says is that there is a point at which the computers are learning faster than we are, and then they begin to escape our control.
And my answer, again, is that at the point at which the computers get a little uppity, if you will, some enterprising human is going to go turn off the power supply and say, stop now. So I'm not worried about oversight, but I do think there's a scenario where the computers, as you describe, get so far ahead of us that you could have this concern.
After the break, America's AI competition with China, and what happens if AI technologies aren't aligned with human goals. We'll be right back. There are some people, like the AI theorist Eliezer Yudkowsky, who worry about the current incentives around the development of AI.
They say they're dangerous because of what they call the alignment problem, as in we're making these powerful things that have never existed before. And we need to make sure that by the time we turn them loose in the world, we're confident that they're sufficiently aligned with our human goals.
And right now, the incentive for big companies like Facebook or Google or whoever is not to make the safest, most human-aligned AI, but to make the first AI, to win the artificial intelligence race and dominate it in the way that Google dominates online search. Are they right to worry? They should worry, although some of the narrative is not correct.
The reason they should worry is that the competition will be broadly among many companies, but they're not competing for one prize. They're competing for many prizes. And what is true now that was not true 50 or 60 years ago, for example in the nuclear age, is that the vast majority of the innovation is occurring within these companies, because they have large pools of computers, and the scientists who are building these need lots and lots of horsepower, lots and lots of computers.
So the way to understand it is these tools are very, very powerful. They're going to be embedded in our lives. There are going to have to be regulations to control some of the excesses.
I'll give you an example about advertising. A simple way to understand the problem in social media is that a social media company wants to maximize its revenue. And the best way to maximize revenue is to maximize engagement, that is use. And the best way to maximize use is to maximize outrage. So now you understand why we're outraged all day. Outrage this, outrage that, and so forth. It serves the economic interests of the advertisers.
They may be directly or indirectly participating in these decisions, but this is how the algorithms work. The solution, by the way, is to regulate the parts of that that you don't like. It's not to prohibit it. After the so-called Facebook files were released,
I was interested to see that some of the insiders and former insiders at Facebook blamed the AI-ification, that is, the amplification of the feed. If you remember, Facebook and Twitter used to have a historic feed. It was linear: here it is, here it is, like what's going on currently. And they changed it to make it more targeted. And by making it more targeted, they made more money, but it became more personal.
The shared information space that we live in could be fractured by this. But let me say it as a more general issue. We have to build these systems with human values, and we have to understand that we're playing with fire.
You're dealing especially with young people whose identities are not fully formed, whose value systems are not fully formed. We really need to have the equivalent of the textbook conversation in school. In the school districts, they have these huge fights over what's in a textbook and what's not. And the kid goes home, is on the Internet, and does whatever they want. That's a problem. That has to get addressed.
You keep referring to regulation and how all we need to do is regulate it and that's going to make it okay. But how can you assume regulation will happen when the people who are running Washington are still using AOL email addresses? Well, not only is there the problem of having competent regulators, which is always a question in my mind, but there's another question of how you do it. One of the open questions is how do you restrict some of these AI systems? The technical answer is
that AI systems maximize an objective function mathematically. So if you say you want more of something, the AI system will figure out a way to get more of it. Well, if you maximize revenue, that's what a corporation is supposed to do. So how are you going to write a rule that says that? This is an unsolved problem.
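To make the objective-function point concrete, here is a minimal toy sketch in Python, with entirely hypothetical content items, signals, and weights: an optimizer told only to maximize engagement drifts toward outrage on its own, and the only "rule" that binds it is one written into the objective itself.

```python
# Toy illustration (hypothetical items and numbers) of the point above:
# an AI system simply maximizes the objective function it is given.
# It has no notion of "acceptable"; a rule only binds if it is
# expressed inside the objective itself.

candidates = [
    {"name": "calm_news", "engagement": 0.40, "outrage": 0.10},
    {"name": "hot_take",  "engagement": 0.75, "outrage": 0.60},
    {"name": "rage_bait", "engagement": 0.95, "outrage": 0.90},
]

def revenue_objective(item):
    # Revenue tracks engagement, so the optimizer drifts toward outrage.
    return item["engagement"]

def constrained_objective(item, outrage_penalty=1.0):
    # The "rule" works only when written into the objective itself.
    return item["engagement"] - outrage_penalty * item["outrage"]

print(max(candidates, key=revenue_objective)["name"])      # rage_bait
print(max(candidates, key=constrained_objective)["name"])  # calm_news
```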
I'm curious about what you anticipate AI is going to do, putting aside what it's going to do fundamentally to our humanity and what it means to be human, which is sort of mind-bendingly big: what is it going to do to our economy and our politics? I'm thinking now of, you know, when Uber first came out and how much chaos resulted from it.
You could say creative destruction or chaos, but especially in places like New York, people blamed Uber for the decimation of the taxi business. Drivers committed suicide. And this was over an app that allows you to hail a cab with your phone. So if that was the straw that broke the camel's back, now you're suggesting let's put millions of straws on the backs of millions of camels. What do you think is going to happen?
We don't think society is ready for the implications of a fully digital, AI-powered society. These amplification systems, which really do disrupt existing networks, have a cost, but they also have a benefit.
The political conclusion with Uber in New York City, as you know, was that they raised taxes on all the ride carriers, including taxis. But they didn't do the one thing that they should have done, which is increase the number of taxicab medallions in a competitive scenario. They didn't do that, and they haven't done it for 50 years. So that's a good example where we should debate what the appropriate response was.
In Uber's case, Uber is ultimately seen as an essential service in many of the cities that it operates in. So it was disruptive, but today it's accepted. I worry that the impact on human psychology and our identity will be disruptive in a way that we don't understand and we can't fix. Well, my impulse, maybe it makes me a Luddite, is to say, you know,
stop, unplug it all. But obviously, you don't think we should stop it. And I guess I want to ask, why don't you? First, I think there are enormous benefits to this technology
in terms of knowledge and learning and education. Imagine if every human on the planet has access to the best education, highly targeted to them for free. Those products are in development now. I defy you to argue against educating the entire world. It's got to be a good thing. It has to produce societal wealth, economic wealth, and so forth.
What I do think is that people will spend much of their day interacting with these systems. Sometimes they'll understand how the systems work. Sometimes they won't. With respect to regulation, I think we start with regulations about transparency. Tell us how you got here. Tell us what this thing does, what information it uses, how it makes its decisions so we can decide if we want to use it.
And the reason you can't stop is that every country is doing this. And America is, of course, the great innovator, the great global leader in most markets.
China has announced that it wants to be the world's global leader and dominate the field by 2030. They're already leading in key industries, obviously surveillance, but also electronic commerce. They're working very hard on quantum, working very hard on these large language models. They have more people, more money, lots of PhDs. Their quality of work is very good. How would you feel, again, if you were sitting here in the United States
or in the West, and you're using a system which has been trained in China and has embedded in it the rules and culture of the Chinese Communist Party, you wouldn't be comfortable.
It's a different system. So you're going to have to compete. We are going to have to compete. We're going to have to build these things. But it goes back to what you said earlier. We're going to have to build it with human values, with the values that we care about. And if we are unclear as to what our values are, we need to discuss it now. Okay. Well, given that you just brought up China, let's go to China. When it comes to America's AI competition with China, you've said that while we're still ahead, we're bound to fall behind in the next five to 10 years.
Do you think there's still time to course correct? And what's at stake if we lose that competition? I'll give you a couple of examples. Everybody in our audience here knows about TikTok. TikTok's a Chinese company that's growing faster than any company that we've seen in tech. And it's impressive. Organized in China, run globally in different data centers.
So let's say that there's something that TikTok does. There was a recent story about the impact it had on potentially anorexic girls. I don't know if it's true, but that was the claim. Let's say it's true. Who do you complain to? Who do you call? You call the U.S. government? The U.S. government has nothing to do with it. You call the Chinese government? Try calling them. Who do you call? Where's the phone number? Try getting the president's number, too. You won't get very far.
So we really don't want the global platforms to be originated outside of Western values, and we would prefer that they be built in the United States. I was fortunate enough to be the chairman of an AI commission for the Congress where we said that the creation of these AI-enabled industries, global industries, was a $50 trillion business opportunity.
As you know, in the United States, the vast majority of the shareholder and stock market growth has been because of these global export companies, including the large tech companies, which everyone is busy criticizing. We all benefit from that: the jobs, the wealth, the stock market, the secondary things, and so forth. I would hate to have the key industries that we lead in in America be dominated by a non-Western country, and in particular by China, because they'll take all the money.
It'll really impoverish Americans. So we have to make sure we win. What do you say to the people, Eric, who say that American businesses are putting money into the pockets of the Chinese Communist Party, are essentially fueling a modern-day Nazi party, and that history will remember it that way? I'm sure that's not true. And the reason is not necessarily that the American companies don't want to, but that the Chinese companies don't want it.
The doctrine in China right now is called "Made in China 2025," and they are working very, very hard to get rid of these pesky outside influence types like the Americans, because they want to control the narrative. The highest priority for the Chinese Communist Party is control and remaining in power, and they've apparently done a pretty good job of that, if you look at where they are today. We'll see if that continues.
And I think there's every evidence that they're trying to eliminate Western influence and make sure that if they steal it or take it, they take it for their own reasons and not because some American company tried to help them. Recently, the head of Bridgewater, Ray Dalio, went on TV and said that
not only does he not have a problem helping the Communist Party of China get richer and richer, but he took it further. And then I look at the United States and I say, well, what's going on in the United States? And should I not invest in the United States because of our own human rights issues or other things? You know, and I'm not trying to make political comparisons. I'm basically just trying to follow the rules, understand what's going on, and invest.
But Ray, you recognize, I think, that what's going on in the United States, and look, there are things that happen in the United States that I don't agree with, that I imagine you don't agree with. But I think that those things are different from some of the things we see happening in China. The government isn't disappearing people, for example. And he actually compared China's government to that of a strict parent. The United States is a country of individuals and individualism. He said in China, it's an extension of the family, and it's very much top-down. And as a top-down country, they behave like a strict parent. And he added something along the lines of: who are we to judge, given the bad things that we do as America?
More recently, Chamath, one of the owners of the Golden State Warriors, went viral on Twitter when he took things even further and just said: Nobody cares about what's happening to the Uyghurs, okay? You bring it up because you really care, and I think that's nice that you care. The rest of us don't care. Let's be honest, nobody cares what's happening to the Uyghurs. I'm telling you a very hard, ugly truth, okay? Of all the things that I care about, yes, it is below my line.
Are they just saying out loud what other business leaders are saying quietly and in private? That's not what I've heard, and I don't recognize those phrases in the people that I speak with.
The vast majority of business leaders that I've spoken with are very concerned that America retain its platform leadership globally, because that's where we make our money. The greatest American export, by the way, is the fact that our global American companies have operations in these countries.
And when they operate in those countries, they operate based on U.S. law. So, for example, they give proper freedom and treatment to women, protected minorities, gay people, that sort of thing, which is a great American export. It helps change those countries and modernizes them. So I don't agree with that. A lot of people should care about the Uyghurs. The issue with the treatment of the Uyghurs is not whether we should care. We clearly should. But what would you like to do about it?
And I think in the case of China, the most likely scenario is a very uncomfortable coopetition partnership with China for the next decades. And when I say uncomfortable, I mean very, very uncomfortable. But we are critically dependent upon them for natural goods, recycling our dollars, and so forth. And they are still critically dependent on the West for certain innovations. And I think both countries benefit from the exchange at the moment.
There's a rising sense, as you know, among some who call themselves national conservatives, but also some on the populist left, who say, no, the solution is not greater cooperation with China. The solution is to separate from China. And it's okay if we have to pay a higher premium for our goods. It's worthwhile. What do you say to that argument? We've looked at this disengagement question before.
And it would be pretty disastrous. First, the prices in America would go up a lot. Second, you'd have product shortages that are inconceivable. So you'd have, I'll make a number up, probably a doubling of prices, maybe more. You'd have a huge cost escalation in your local Walmart, cost of goods, inflation, and so forth.
And more importantly, the computing systems and the intellectual systems are largely made with Chinese subcomponents. The same argument is also true for China. Today, China is well behind the West in semiconductors. The leading company in semiconductors is a Taiwanese company called TSMC.
And we judge China to be many generations behind the leading edge. So they are critically dependent upon those chips. In fact, they import more chips than oil, to give you a sense of how important it is to them.
Both countries have a strong dependence on the other and the global trading system. If they were to unilaterally stop, they would pay a heavy price internally. It would take a long time to recover. So if you were president of the United States now, given what's happening in China, the genocide we know is happening, but also the feeling you have that lots of other people also share, that we are inextricably connected economically at this point,
What would you do? So China today is of a similar size to the United States. And at the height of the Cold War, the Soviet Union was roughly a third of the GDP of the United States. So China is a different animal. And Americans today think that we should basically treat China the way we treated the Soviet Union. Mm-hmm. That strategy is not going to work. I am also one of the people who believes that, at least for the next decade, China is going to stick to its primary focus, which is its regional domination and its expansion of economic power into the so-called BRI countries. So I think that this is an opportunity for tough talk.
I think it's an opportunity for restrictions of key technologies. ASML, the Dutch semiconductor equipment company, is a good example: the decision made by President Trump to restrict it was a good one, because it keeps the Chinese behind the West in these key areas. There are others.
But I don't think I would go to a direct war footing unless we were provoked. And I don't think I would do a full disengagement. And if China made moves against Taiwan, do you think that we should interfere? So I worked for the military for many years. I don't anymore. And the military has many, many China-attacks-Taiwan scenarios; that's the dominant one they talk about.
And my guess is that China will threaten but not attack because the threatening gives China many advantages. It strengthens the nationalism in China. The nationalism is pro-Chinese, pro-CCP.
It allows them to bedevil the West and the evil Americans and so forth. There's a long history of this, which starts from the Boxer Rebellion and the Opium Wars and things against England and English-speaking countries.
To actually begin a conflict, there's the possibility of loss. You see this in Ukraine today. Now, I can obviously be wrong here, but in Russia, Putin benefits right now from being the subject of everything; everyone's focused on him, and they're trying to figure out what concessions to give him. But if he actually moves, there's a possibility of a real bad outcome.
So the Chinese are very intelligent and very good strategists. And the most likely scenario is that there will be a lot of saber rattling. There'll be a lot of pressure. There'll be a lot of restrictions. But I don't see the kind of disaster scenarios, at least in the next decade. After the break, is Google just too big? I asked that of Eric Schmidt, and a lot more. We'll be right back. Okay, let's talk about big tech. A lot of people feel, on the right and on the left, that big companies like Google have just gotten too big. And there are kind of two policies that have emerged, two suggestions based on American precedent. One is break them up, which is something Elizabeth Warren says a lot, and do what Teddy Roosevelt did to Standard Oil. And two, and this is an idea that was floated recently by Clarence Thomas in a concurring opinion, but also by VCs like David Sacks, is make companies like Google into common carriers, like the electricity company or Amtrak.
And their argument is basically that a lot of these companies now constitute the public square. David Sacks put it this way. I thought it was really smart. When speech got digitized, the town square got privatized, and the First Amendment got euthanized. If you can't speak online, or if your ability to speak online is controlled by a tiny handful of companies with no due process, do you really have a free speech right in this country anymore? Yeah.
What do you think? So the problem I have with those narratives is that some of them I actually like emotionally, but I don't think any of them solve the actual problem. So rather than saying big tech is too big and let's break it up, why don't we talk specifically about a problem? So I'll give you an example. 70% of the online book market, actually the book market now, is controlled by Amazon. So, if a book is not accepted by Amazon, it is effectively kaput. Now, to me, that's probably more market power than we want in a single company. Too much ability to censor a book. But there is an obvious solution, which is to require that books at least be carried under common terms at Amazon.
Now, does that require making Amazon a common carrier? Of course not. Furthermore, if you make it a regulated common carrier, I can assure you that the Chinese will win because when you do that kind of heavy regulation, what happens is all the innovation stops because all the lawyers are running everything. So Elizabeth Warren's proposal to split YouTube from Google and the App Store from Apple and so forth, they're not going to solve the problem.
I think, in my conversations with you and with others, the vast majority of these complaints are really around content management. And we have to come to some agreement in our society about what speech is permitted online, and what amplification. My own view, which I'll say very clearly, is I'm in favor of absolute free speech for humans, including idiots, but I'm not in favor of free speech for robots.
And a lot of the problems you get into are because you've always had crazy people who thought the Earth was flat and so forth, and they have nothing else to do because this is what they do all day, and they promote and promote, and that's fine, they can scream as loud as they want. But now they have amplifiers. And a situation where a falsehood, and the anti-vax movement is a well-established example, becomes mobilized and militarized, and people who are otherwise unsuspecting get recruited into it, that's a problem. We saw this with ISIS terrorism. They were using YouTube videos to motivate. We eventually figured out a way to stop that. And this is a problem that we're going to have to have a conversation about. Since we don't agree in our country on what is acceptable speech, although we say we do but we don't really, we don't have a basis for that regulation.
My personal recommendation would be every human gets to have absolute free speech, but that the amplifications that these systems do are limited to certain kinds of speech that are reasonable, protected and reasonably truthful.
Right now, there are people in tech and a lot of people inside Spotify that are calling for Joe Rogan to be removed or censored for some of the interviews he's done with doctors who say unfounded things about COVID and about the vaccine. What would you do if you were the head of Spotify?
Well, the tech industry in general has for the first time taken content regulation seriously, by having rules about falsehoods about COVID. There are also other rules, like you can't call on people to kill or harm people, and things like that. And I think the argument around the anti-vaxxers is that they are resulting in the deaths of people. In the middle of a pandemic, I sort of agree with that argument.
I am personally not worried about emotional harm and offending people and so forth. I think that's part of a robust debate in our society. And I would protect the dissidents and the people I disagree with, and I want to hear them. When what they say leads to death...
I think that's a different matter. There are plenty of restrictions on free speech in humanity. You can't, for example, yell "fire" in a crowded theater. It's illegal because people might die. So I think that's an example of a modification to a pure free speech thing that is reasonable.
So if I were looking at Spotify, and of course, Daniel Ek is actually not in the US, I would probably say to Mr. Rogan, we love you. We love the things that you're saying. There's an area here where there are too many falsehoods. That would be what I would do.
The problem, of course, though, is that the definition of misinformation is always changing, right? A year ago, you could get banned from social media if you suggested that the coronavirus originated in the Wuhan lab. Now that's, I would say, where most of the consensus is, or at least it's a theory that many smart people are happy to entertain. Inevitably, misinformation is sort of in the eye of the beholder.
This is connected to the problem that I see inside many of these companies, which is the problem of groupthink: most of the people who work in these companies tend to think the same way about the world, and therefore might have a skewed understanding of what's acceptable and what isn't, based on the groupthink that's happening inside the company, if that makes sense.
So I have a pretty strong opinion that the way you solve misinformation is you do ranking. As CEO of Google, I faced this every day. And I would always say the same thing: Google is in the business of making ranking decisions. One, two, three, four. And we make them every day, billions of times a day. If you look at the majority of the places you get information from, they're not ranking it. They're not trying to figure out what's true and what's not. And Google really tries to do that.
And using ranking in these systems, using the necessary signals, would probably improve the quality of the conversation on both sides. There are many other suggestions of how to address misinformation, but since we don't have a guide and we're unlikely to get government specifications for misinformation, your best choice as an entrepreneur in a company is to rank it in such a way that there is trust around how your system works.
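As an illustration of the ranking approach Schmidt describes, here is a minimal sketch, with entirely hypothetical items, signals, and weights: content is scored on topical relevance blended with a source-trust signal, and then ordered, so low-trust material remains available but is not what surfaces first.

```python
# Minimal sketch of signal-based ranking (hypothetical items, signals,
# and weights): score content on relevance blended with source trust,
# then order results. Nothing is removed; low-trust items just don't
# surface first.

items = [
    {"title": "peer_rumor",      "relevance": 0.9, "source_trust": 0.2},
    {"title": "news_report",     "relevance": 0.8, "source_trust": 0.9},
    {"title": "official_advice", "relevance": 0.6, "source_trust": 1.0},
]

def score(item, w_relevance=0.5, w_trust=0.5):
    # Blend topical relevance with a trust signal about the source.
    return w_relevance * item["relevance"] + w_trust * item["source_trust"]

for item in sorted(items, key=score, reverse=True):
    print(item["title"], round(score(item), 2))
# news_report 0.85, official_advice 0.8, peer_rumor 0.55:
# the rumor is still available, just not the first answer.
```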
Today, when you - it's easy to complain about Facebook here - when you look at Facebook and you look at a stream, how do you know that that information has any truth at all? Because it's from a peer group. And yet we know that the average person is more likely to believe their peers or their apparent peers or their so-called weak links than they do authorities. Whereas I, for example, am much more likely to trust authority because I'm a scientist and I believe in government and all that kind of stuff. So how do you sort that out? And I think the answer is some form of ranking.
And since I don't think the government is going to come up with an approved definition of misinformation, the tech companies as a rule have a responsibility to police misinformation as best they can with their tools. I'll give you another example. The worst misinformers around COVID, as measured by death rate and misinformation, are well understood. And there is a series of companies, which are essentially doctors who are information mills around natural pills and so forth, and they're known. And yet the industry is quite slow to censor them, even though they're clearly doing something wrong. My answer is you don't censor them. What you do is you make them available, but you don't promote them. In other words, the information is there, but it's not the first answer. It's not in your feed first. So
I want to ask you, there's kind of this caricature of the American billionaire where it's a person who lives in constant luxury, is floating from yacht to mansion, has incredible athletic trainers to keep them physically fit, takes their private jets around the world, and really often doesn't interact with most normal people and that this distorts their view of where the country is and where most people's ideas are.
Which parts of that are true and which parts of that are totally misinformed and wrong?
You know, America is producing so many wealthy people, because we're printing lots of money and because of the success of our businesses and so forth, that I don't think there's a single answer to the question of what a truly rich person looks like today. All the stereotypes are probably true in some sense. I have many friends who have never changed their behavior. They're just literally the same. And other people have been affected in a profound way. What I've tried to do is retain my curiosity about the world and all the new things around me. And I can tell you that we're entering a period of enormous innovation, enormous ideas, enormous challenges, and I find them all very interesting. And the easiest way to study them is to talk to other people of all kinds. On the one hand,
Lots of people are getting rich. On the other hand, an increasing number of people that I know, especially young people, feel very disenchanted with capitalism. And I'm talking about conservatives and progressives. Why do you think that is? Well, if you look at a young person today, we've really made their life much harder. And it starts with the cost of housing.
The number one issue that a young person faces today is that with the combination of inflation and real increase in housing prices, the opportunity to own their home, build their family, have appreciation is really getting farther and farther away. That's true, by the way, not just in the US, but around the world. That's an example where if we simply built more housing, we could probably address it because people need housing. So there are plenty of specific things that we could do if we decided to that would address these kinds of frustrations.
This question about the American Dream is to some degree mythical, because the American Dream in the 1950s was largely for white, educated people, post-war. Today there is much less income mobility, there's much less generational mobility, people believe that their children will not do as well as they have, and people are much less likely to move; that's largely a real estate and employment set of issues. These are societal problems that are not being addressed because we're too focused on the wrong things.
We need to figure out a way to grow the country in terms of including high-skills immigration, more productivity, higher wealth for everybody, better housing, better access to food, and better transportation. Okay, let's do a quick lightning round. Eric, when historians look back, what do you think the long-term legacy of Google will be? Well, all of these companies eventually end. And so there will eventually be, many decades from now, a replacement for Google that will do something different in information.
but it'll be a long time from now and I look forward to discovering what it is.
What's your favorite city in America? Probably New York. Do you believe in God? Yes, absolutely. How else would you explain the soul? Bullish or bearish on America? Bearish at the moment, because of our inability to govern ourselves. Latest book that you read? I just finished Alina Chan's book, which is called Viral. It's about her beliefs about the origin of the COVID virus and the Wuhan lab. Are you buying cryptocurrency?
I have. Which? Bitcoin, and I've been losing money as we speak. Do you ever think that AI will be what we think of as conscious? Since we can't define misinformation and we can't define consciousness, this will be a question for philosophers. If you have a computer...
that clearly behaves equal to or better than your teenager, and your teenager cannot explain his or her behavior and the computer can't either, are they different? We don't know. That is a philosophical question. Who is your favorite philosopher? Kant. What is your most recent Google search? I can look it up. Great. Optimizing your token distribution.
How do you organize Web3 token ownership in a DAO? Do you spend time in the metaverse? I don't know what the metaverse is, but I think I do. Where do you get your news? Google News, The Wall Street Journal, The New York Times. And then there are many Substack and Medium posts that I subscribe to, along with Twitter.
Do you remember my producer's dad's software product, Mojo, which you demonstrated on stage at JavaOne's very first conference in the 1990s? I do not. But I remember doing demos in Java embedded in the Mosaic browser, and that must have been it. What is something that you've changed your mind about? Blockchain. I've spent the last five years as a well-known blockchain critic.
Because as a computer scientist, I looked at the blockchain and I thought that the synchronization, it basically synchronizes every so often, was just way too slow. In the last 18 to 24 months, the community has figured out how to use what are called layer-two protocols, sorry to be so technical, to address those latency problems and to begin to build really powerful distributed systems.
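For readers wondering what "layer two" means mechanically, here is a toy sketch, not any real protocol, with illustrative names and numbers: many transactions are accepted quickly off the slow, fully synchronized base chain, and only a compact batch commitment is written back to it, which is one way these protocols sidestep the latency Schmidt describes.

```python
import hashlib
import json

# Toy sketch of the layer-two idea (not any real protocol; names and
# numbers are illustrative). The slow, fully synchronized base layer
# sees only an occasional compact commitment, while individual
# transactions are accepted quickly off-chain.

base_chain = []  # layer 1: slow, one commitment per batch
pending = []     # layer 2: fast, transactions accumulate here

def submit(tx):
    # Layer 2 accepts a transaction immediately; no global sync needed.
    pending.append(tx)

def settle():
    # Periodically, the whole batch is committed to layer 1 as one hash.
    global pending
    digest = hashlib.sha256(json.dumps(pending).encode()).hexdigest()
    base_chain.append({"batch_size": len(pending), "commitment": digest})
    pending = []

for i in range(1000):                # 1,000 fast layer-2 transactions...
    submit({"from": "a", "to": "b", "amount": i})
settle()                             # ...cost a single layer-1 write

print(base_chain)
```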
So the true believers, of which I'm not quite one yet, believe that fully distributed systems will be even more powerful than fully centralized systems.
And the delight of my industry is that just as we were convinced that the Chinese were going to dominate everything with completely centralized systems, and all these big tech companies were going to destroy the world, I'm being facetious here, a new technology comes along which is the exact inverse of what I just said. And that's why it's so fun to be in this industry today. What are you most proud of?
On a personal basis, I believed in networks and I believed that there would be enormous power in interconnecting humans. I had no idea when I started 45 years ago in this that I would be one of the people who got the world connected to each other. All of your questions are about what happens when the world is all communicating with each other. I'm very proud that we have these questions ahead of us. I think it's wonderful.
Eric, thank you so much. Hope to see you sometime soon or maybe in the metaverse. Thank you very much, Barry. Thank you all.
Thanks for listening. I hope you guys heard our episode earlier this week, which was a debate about American power with Matt Taibbi and Bret Stephens. We want to do more debates, because people don't talk to each other enough, especially people who disagree. If you have an idea for a debate topic or suggestions about people you want to hear argue, please email us at tips@honestlypod.com. See you next week.