
Yuval Noah Harari: This Election Will Tear The Country Apart! AI Will Control You By 2034! The Dark Truth Behind Meta & X!

2024/9/5

The Diary Of A CEO with Steven Bartlett

Chapters

Yuval Noah Harari discusses the potential for AI to surpass human control, not through a Hollywood-style takeover, but through a gradual shift of decision-making power to AI systems in various sectors.
  • AI's potential dominance hinges on decisions made in the coming years.
  • AI systems could run the world through bureaucratic processes rather than a single computer.
  • AI's alien intelligence stems from its distinct decision-making processes compared to humans.

Shownotes Transcript

Quick one, I want to say a few words from our sponsor, NetSuite. One of the most overwhelming parts of running your own business, as many of you entrepreneurs will be able to attest to, is staying on top of your operations and finances. Whether you're just starting out or whether you're managing a fast-growing company, the complexities only increase. So having the right systems in place is crucial. One which has helped me is NetSuite. They're also a sponsor of this podcast, and NetSuite is the number one cloud financial system, bringing accounting, financial management, inventory, and HR into one fluid platform.

With this single source of truth, you'll have the visibility and control to make fast, informed decisions, which is crucial in business. I remember the chaos of scaling my first business and trying to keep everything in order. It was an absolute nightmare. And it's tools like NetSuite that make this easier. So if you're feeling the pressure, let NetSuite lighten the load. Head to netsuite.com slash Bartlett and you can get a free download of the CFO's Guide to AI and Machine Learning. That's netsuite.com slash Bartlett.

The humans are still more powerful than the AIs. The problem is that we are divided against each other and the algorithms are using our weaknesses against us. And this is very dangerous because once you believe that people who don't think like you are your enemies, democracy collapses and then the election becomes like a war. So if something ultimately destroys us, it will be our own delusions, not the AIs. We have a big election in the United States. Yes, democracy in the States is quite fragile. But the big problem is

What if... Surely that will never happen. Yuval Noah Harari, the author of some of the most influential non-fiction books in the world today, is now at the forefront of exploring the world-shaping power of AI, and how it is beyond anything humanity has ever faced before. The biggest social networks in the world, they're effectively going to go for free speech. What is your take on that? The issue is not the humans. The issue is the algorithms. So let me unpack this. In the 2010s, there was a big battle between algorithms for human attention. Now, the algorithms discovered...

When you look at history, the easiest way to grab human attention is to press the fear button, the hate button, the greed button.

The problem is that there was a misalignment between the goal that was defined for the algorithm and the interests of human society. But this is where it becomes really disconcerting. Because if so much damage was done by giving the wrong goal to a primitive social media algorithm, what would be the results with AI in 20 or 30 years? So what's the solution? We've been in this situation many times before in history, and the answer is always the same, which is... Are you optimistic?

I try to be a realist. This is a sentence I never thought I'd say in my life. We've just hit 7 million subscribers on YouTube. And I want to say a huge thank you to all of you that show up here every Monday and Thursday to watch our conversations. From the bottom of my heart, but also on behalf of my team, who you don't always get to meet, there's

almost 50 people now behind The Diary Of A CEO who work to put this together. So from all of us, thank you so much. We did a raffle last month and we gave away prizes for people that subscribed to the show up until 7 million subscribers. And you guys loved that raffle so much that we're going to continue it. So every single month we're giving away money-can't-buy prizes, including meetings with me, invites to our events and £1,000 gift vouchers

to anyone that subscribes to The Diary Of A CEO. There's now more than 7 million of you. So if you make the decision to subscribe today, you can be one of those lucky people. Thank you from the bottom of my heart. Let's get to the conversation. 10 years ago, you made a video that was titled, Why Humans Run the World. It's a very well-known TED talk that you did.

After reading your new book, Nexus, I wanted to ask you a slightly modified question, which is, do you still believe that 10 years from now, humans will fundamentally be running the world? I'm not sure. It depends on the decisions we all take in the coming years. But there is a chance that the answer is no.

That in 10 years, algorithms and AIs will be running the world. I don't have in mind some kind of Hollywoodian science fiction scenario of one big computer conquering the world. It's more like a bureaucracy of AIs: we will have millions of AI bureaucrats everywhere.

you know, in the banks, in the government, in businesses, in universities, making more and more decisions about our lives, everyday decisions: whether to give us a loan, whether to accept us for a job. And we will find it more and more difficult to understand the logic, the rationale, why the algorithm refused to give us a loan, why the algorithm accepted somebody else for the job.

And, you know, you could still have democracies with people voting for this president or this prime minister. But if most of the decisions are made by AIs and humans, including the politicians, have difficulty understanding the reason why the AIs are making a particular decision, then power will gradually shift from humanity to these new alien intelligences.

Alien intelligences? Yeah, I prefer to think about AI... I know that the acronym is artificial intelligence, but I think it's more accurate to think about it as an alien intelligence. Not in the sense of coming from outer space, but in the sense that it makes decisions in a fundamentally different way than human minds. Artificial carries the sense that we design it, we control it. Something artificial is made by humans.

With each passing year, AI is becoming less and less artificial and more and more alien. Yes, we still design the kind of baby AIs, but then they learn and they change and they start making unexpected decisions and they start coming up with new ideas, which are alien to the human way of doing things. You know, there is this famous example with the game of Go.

In 2016, AlphaGo defeated the world champion, Lee Sedol. But the amazing thing about it was the way it did it. Because humans have been playing Go for 2,500 years. A board game. A board game, a strategy game developed in ancient China and considered one of the basic arts that any cultivated, civilized person in East Asia had to know.

And tens of millions of Chinese and Koreans and Japanese played Go for centuries. Entire philosophies developed around the game of how to play it. It was considered a good preparation for politics and for life. And people thought that they explored the entire realm, the entire geography landscape of Go.

And then AlphaGo came along and showed us that actually for 2,500 years, people were exploring just a very small bit, a very small part of the landscape of Go. There are completely different strategies of how to play the game that not a single human being came up with in more than 2,000 years of playing it. And AlphaGo came up with it in just a few days. So this is alien intelligence.

And, you know, this is just a game, but the same thing is likely to happen in finance, in medicine, in religion, for better or for worse. You wrote this book, Nexus. Nexus. How do you pronounce it? Nexus. Nexus. I'm not an expert on pronunciations. You could have written many a book. You're someone that's, I think, broadly curious about the nature of life, but also the nature of history. Yes.

For you to write a book that is so detailed and comprehensive, there must have been a pretty strong reason why this book had to come from you now. And why is that? Because I think we need a historical perspective on the AI revolution. I mean, there are many books about AI. Nexus is not a book about AI. It's a book about the long-term history of information networks. I think that to understand what is really...

new and important about AI, you need a perspective of thousands of years, to go back and look at previous information revolutions, like the invention of writing and the printing press and the radio. And only then do you really start to understand what is happening around us right now.

One thing you understand, for instance, is that AI is really different. People compare it to previous revolutions, but it's different because it's the first technology ever in human history that is able to make decisions independently and to create new ideas independently.

A printing press could print my book, but it could not write it. It could just copy my ideas. An atom bomb could destroy a city, but it can't decide by itself which city to bomb or why to bomb it. And AI can do that.

And, you know, there is a lot of hype right now around AI. So people get confused because they now try to sell us everything as AI. Like you want to sell this table to somebody? Oh, it's an AI table. And this water, this is AI water. So people, what is AI? Everything is AI. No, not everything. There is a lot of automation out there, which is not AI. If you think about a coffee machine,

that makes coffee for you. It does things automatically, but it's not an AI. It's pre-programmed by humans to do certain things, and it can never learn or change by itself. A coffee machine becomes an AI if you come to the coffee machine in the morning and the machine tells you: hey, based on what I know about you, I guess that you would like an espresso.

It learned something about you and it makes an independent decision. It doesn't wait for you to ask for the espresso. And it's really AI if it tells you, and I just came up with a new drink. It's called Buffy. And I think you would like it.

That's really AI. When it comes up with completely new ideas that we did not program into it and that we did not anticipate. And this is a game changer in history. It's bigger than the printing press. It's bigger than the atom bomb.

You said we need to have a historical perspective on it. Do you consider yourself to be a historian? Yes, my profession is a historian. This is my training. I was originally a specialist in medieval military history. I wrote about the Crusades and the Hundred Years' War and the strategy and logistics of the English armies that invaded France in the 14th century. Those were my first articles.

And this is the kind of perspective, of knowledge, that I also bring to try and understand what's happening now with AI. Because most people's understanding of what AI is comes from them playing around with a large language model like ChatGPT or Gemini or Grok or something. That's like their understanding of it. You can ask it a question and it gives you an answer.

That's really what people think of AI as. And so it's easy to be a bit complacent with it, or to see this technological shift as being trivial. But when you start talking about information and the disruption of the flow of information and information networks, and when you bring it back through history and you give us this perspective on the fact that information effectively glues us all together, then, for me, I start to think about it completely differently.

I mean, there are two ways I think about it. I mean, one way is that when you realize that, as you said, that information is the basis for everything, when you start to shake the basis, everything can collapse or change or something new could come up. For instance, democracies are made possible only by information technology.

Democracy, in essence, is a conversation, a group of people conversing, talking, trying to make decisions together. Dictatorship is that somebody dictates everything. One person dictates everything. That's dictatorship. Democracy is a conversation. Now, in the Stone Age, hunter-gatherers living in small bands, they were mostly democratic. Whenever the band needed to decide anything, they could just talk with each other and decide.

As human societies grew bigger, it just became technically difficult to hold the conversation. So the only examples we have from the ancient world for democracies are small city-states like Athens or Republican Rome. These are the two most famous examples, not the only ones, but the most famous.

And even the ancients, even philosophers like Plato and Aristotle, they knew once you go beyond the level of a city-state, democracy is impossible. We do not know of a single example from the pre-modern world of a large-scale democracy. Millions of people spread over a large territory, conducting their political affairs democratically.

Why? Not because of this or that dictator that took power. Because democracy was simply impossible. You cannot have a conversation between millions of people when you don't have the right technology. Large-scale democracy becomes possible only in the late modern era.

When a couple of information technologies appear, first the newspaper, then telegraph and radio and television, and they make large-scale democracy possible. So democracy, it's not like you have democracy and on the side you have these information technologies. No, the basis of democracy is information technology. So if you have some kind of earthquake,

in information technology, like the rise of social media or the rise of AI, this is bound to shake democracy. And this is what we now see around the world: we have the most sophisticated information technology in history, and people can't talk with each other.

The democratic conversation is breaking down. And every country has its own explanation. Like you talk to Americans, what's happening there between Democrats and Republicans? Why can't they agree on even the most basic facts? And they give you all these explanations about the unique conditions of American history and society. But you see the same thing in Brazil. You see the same thing in France, in the Philippines.

So it can't be the unique conditions of this or that country. It's the underlying technological revolution. And the other thing that I bring from history is how even relatively small technological changes, seemingly small changes, can have far-reaching consequences. Like you think about the invention of writing. Originally, it was basically people playing with mud.

I mean, writing was invented for the first... It was invented many times in many places, but the first time in ancient Mesopotamia, people take clay tablets, which is basically pieces of mud, and they take a stick and they use the stick to make marks in the clay, in the mud. And this is the invention of writing. And this had a profound effect.

To give just one example, think about ownership. What does it mean to own something? Like, I own a house, I own a field. Previously, before writing, if you lived in a small Mesopotamian village, like 7,000 years ago, and you owned a field, this was a community affair.

It means that your neighbors agree that this field is yours and they don't pick fruits there and they don't graze their sheep there because they agree it's yours. It's a community agreement. Then comes writing and you have written documents and ownership changes its meaning.

Now, to own a field or a house means that there is some piece of dry mud somewhere in the archive of the king with marks on it that says that you own that field.

So suddenly, ownership is not a matter of community agreement between the neighbors. It's a matter of which document sits in the archive of the king. And it also means, for instance, that you can sell your land to a stranger without the permission of your neighbors simply by giving the stranger this piece of dry mud in exchange for gold or silver or whatever. So

What a big change. A seemingly simple invention, like using a stick to draw some signs on a piece of mud. And now think about what AI will do to ownership. Like maybe 10 years down the line, to own your house means that some AI says that you own it. And if the AI suddenly says that you don't own it, for whatever reason that you don't even know, that's it. It's not yours. That mark on that piece of mud was...

Also the invention of, sort of, written language. And I was thinking, when I was reading your book, about how language holds our society together, not in the way that we often might assume, as in me having a conversation with you, but passwords, poetry, banking. It's like our whole society is secured by language. And the first thing that the AIs have mastered, with large language models, is the ability to replicate that.

Which made me think about all the things in my life that are actually held together with language. Even my relationships now, because I don't see my friends. My friends live in Dubai and America and Mexico. So we converse in a language. Our relationships are held together in language. And as you said, democracies are held together in language. Yeah.

And now there's a more intelligent force that's mastered that. Yeah, it was so unexpected. Like, you know, five years ago, people said, AI will master this or that, self-driving vehicles. But language, nah, this is such a complicated problem. This is the human masterpiece, language. It will never master language. And then ChatGPT came. And, you know, I'm a words person, and I'm simply amazed by the

quality of the texts that these large language models produce. It's not perfect, but they really understand the semantic field of words. They can string words together into sentences to form a coherent text. That's really remarkable. And as you said, I mean, this is the basis for everything. Like, I give instructions to my bank with language. If

If AI can generate text and audio and image, then how do I communicate with the bank in a way which is not open to manipulation by an AI? But the tempting part in that sentence is you don't like communicating with your bank anyway. As in calling them, being on the phone, waiting for another human.

So the temptation is, I don't like speaking to my bank anyway, so I'm going to let the AIs do that. I'm going to invest. If I can trust them. I mean, the big question is, I mean, why does the bank want me to call personally to make sure that it's really me? It's not somebody else telling the bank, oh, make this transfer to, I don't know, Cayman Islands. It's really me. And how do you make sure? How do you build this trust? I mean, the whole of finance for thousands of years is just one question, trust.

All these financial devices, money itself is really just trust. It's not made from gold or silver or paper or anything. It's how do you create trust between strangers? And therefore, most financial inventions in the end are linguistic and symbolic inventions. You don't need some complicated physics. It's complicated symbolism.

And now AI might start creating new financial devices and will master finance because it mastered language. And like you said, I mean, we now communicate with other people, our friends all over the world.

In the 2010s, there was a big battle between algorithms for human attention. We're just discussing it before the podcast. How do we get the attention of people? But there is something even more powerful out there than attention. And that's intimacy. If you really want to influence people, intimacy is more powerful than attention. How are you defining intimacy in this regard? Someone that you...

have a long-term acquaintance with, that you know personally, that you trust, that to some extent that you love, that you care about. And until today, it was utterly impossible to fake intimacy and to mass produce intimacy. You know, dictators could mass produce attention,

You know, once you have, for instance, radio, you can tell all the people in Nazi Germany or in the Soviet Union, the great leader is giving a speech, everybody must turn their radio on and listen. So you can mass produce attention. But this is not intimacy. You don't have intimacy with the great leader. Now with AI, you can, for the first time in history, at least theoretically, mass produce intimacy. With millions of bots, maybe working for some government,

faking intimate relationships with us, where it will be hard to know that this is a bot and not a human being. It's interesting, because I've had so many conversations with relationship experts and a variety of people who speak to the decline in

human-to-human intimacy, and the rise in loneliness, and us becoming more sexless as a society, and all of these kinds of things. So with the decline in

human-to-human intimacy and human-to-human connection, and the rise of the possibility of artificial intimacy, it begs the question of what the future might look like in a world where people are lonelier than ever, more disconnected than ever, but still have the same Maslovian need for that connection and that feeling of, you know, love and longing. And maybe this is why we're seeing a rise in polarization at the same time, because people are

desperately trying to belong somewhere, and the algorithm is reinforcing my echo chamber. But I don't know how that ends. I don't think it's deterministic. It depends on the decisions we make, individually and as a society.

There are, of course, also wonderful things that this technology can do for us. The ability of AI to hold a conversation, the ability to understand your emotions, it can potentially mean that we will have lots of AI teachers and AI doctors and AI therapists that can give us better healthcare services, better education services than ever before.

Instead of being, you know, a kid in a class of 40 other kids, where the teacher is barely able to give attention to this particular child and understand his or her specific needs and specific personality, you can have an AI tutor that is focused entirely on you and that is able to give you a quality of education which is really unparalleled. I had this debate with my friend

on the weekend. He's got two young kids who are one and three years old. And we were discussing: in the future, in sort of 16 years' time, where would you rather send your child? Would you rather send your child to be taught by a human in a classroom, as you've described, with lots of people, lots of noise, where they're not getting personalized learning? So if the rest of the class is more intelligent, they're being left behind; if they're more intelligent, they're being dragged back. Or would you rather send

your child to sit in front of a screen, potentially, or a humanoid robot, and be given really personalized, tailored education that would probably be significantly cheaper than, say, private education or university?

You need the combination. I mean, I think that for many of the lessons, it will be better to go with the AI tutor, with which you don't even have to sit in front of a screen. You can go to the park and get a lesson on ecology, just listening as you walk. But you will need...

large groups of kids for break time. Because very often, the most important lessons in school are not learned during the lessons. They are learned during the breaks. And this is something that should not be automated. You would still need a large group of children together, with human supervision, for that. The other thing I thought about a lot when I was reading your book is this idea that I would assume us having more information

and more access to information would lead to more truth in the world, less conspiracy, more agreement. But that doesn't seem to be the case. No, not at all. Most information in the world is junk. I mean, I think the best way to think about it is that it's like with food. There was a time, like a century ago in many countries, when food was scarce. So people ate whatever they could get, especially if it was full of fat and sugar.

And they thought that more food is always good. Like if you ask your great grandmother, she would say, yes, more food is always good. And then we reach a time of abundance in food. And we have all this industrialized processed food, which is artificially full of fat and sugar and salt and whatever. And it's obviously bad for us. The idea that more food is always good, no.

And definitely not all this junk food. And the same thing has happened with information. Information was once scarce. So if you could get your hands on a book, you would read it, because there was nothing else. And now information is abundant. We are flooded by information. And much of it is junk information, which is artificially full of greed and anger and fear because of this battle for attention. Yeah.

And it's not good for us. So we basically need to go on an information diet. Again, the first step is to realize that it's not the case that more information is always good for us. We need a limited amount, and we actually need more time to digest the information. And we have to be, of course, also careful about the quality of what we take in because, again, of the abundance of junk information.

And the basic misconception, I think, is this link between information and truth. That people think, okay, if I get a lot of information, this is the raw material of truth. And more information will mean more knowledge. And that's not the case. Because even in nature, most information is not about the truth. The basic function of information in history, and also in biology, is to connect. Information is connection.

And when you look at history, you see that very often the easiest way to connect people is not with the truth, because the truth is a costly and rare kind of information. It's usually easier to connect people with fantasy, with fiction. Why? Because the truth tends to be not just costly, the truth tends to be complicated and it tends to be uncomfortable and sometimes painful.

If you think, you know, like in politics, a politician who would tell people the whole truth about their nation is unlikely to win the elections. Because every nation has these skeletons in the cupboard and all these dark sides and dark episodes that people don't want to be confronted with.

So we see it politically: if you want to connect nations, religions, political parties, you often do it with fictions and fantasies. And fear? I was thinking about Sapiens and the role that stories play

in engaging our brains. And I was thinking a lot about the narratives. In the UK, we have a narrative where we're told that much of the cause of the problems we have in society, unemployment, other issues with crime, is people crossing from France on boats. And it's a very effective narrative to get people to band together, to march in the streets. And in America, obviously, the same narrative of the wall and the southern border,

they're crossing our border in the millions, they're rapists, they're not sending their good people, they're coming from mental institutions, has galvanized people together. And those people are now marching in the streets and voting based on that story, which is a fearful story. It's a very powerful story because it connects to something very deep inside us.

And if you want to get people's attention, if you want to get people's engagement, so the fear button is one of the most efficient, most effective buttons to press in the human mind. And again, it goes back to the Stone Age. So if you live in a Stone Age tribe, one of your biggest worries is that the people from the other tribe will come to your territory and will take your food or will kill you.

So this is a very ingrained fear, not just in humans, but in every social animal. They did experiments on chimpanzees that show that chimpanzees also have a kind of almost instinctive fear or disgust towards foreign chimpanzees from a different band. And politicians and religious leaders, they learn how to play

on these human emotions, almost like you're playing a piano. Now, originally, these feelings like disgust, they evolved in order to help us. You know, on the most basic level, disgust is there because, you know, especially as a kid,

You want to experiment with different foods. But if you eat something that is bad for you, you need to puke it. You need to throw it out. So you have disgust protecting you. But then you have religious and political leaders throughout history hijacking this defensive mechanism and teaching people from a very young age not just to fear, but to be disgusted by foreign people, by people who look different.

And again, as an adult, you can learn all the theories and you can educate yourself that this is not true, but still, very deep in your mind, there is a part that just says: these people are disgusting, these people are dangerous. And we saw it throughout history, how many different movements have learned how to use these emotional mechanisms effectively

to motivate people. We sit down at a very interesting time, Yuval, because two quite significant things have happened in the last, I think, year as it relates to information and many of the things we've been talking about. One of them is Elon Musk bought Twitter, and his real mandate has been this idea of free speech. And as part of that mandate, he's unblocked

a number of figures who were previously blocked on Twitter, a lot of them right-leaning people who were blocked for a variety of different reasons. And then also this week, Mark Zuckerberg released, basically, a public letter. And in that letter, he says that he regrets the fact that he cooperated so much with the FBI, the government, when they asked him to censor things on Facebook. One particular story, he says he regrets doing that. And it looks like, if you read between the lines of what he's saying,

well, he actually says explicitly, he says, we're going to push back harder in the future if governments or anybody else asks us to censor certain messaging. Now, what I'm seeing is that Twitter, which is one of the biggest social networks in the world, and Meta, the biggest social network in the world, have now taken this stance and effectively they're going to let information flow. They're effectively going to go for this free speech narrative. Now, as someone that's used these platforms for a long time, specifically X or Twitter,

It is crazy how different it is these days. There are things that I see every time I scroll that I never would have seen before this free speech position. Now, I'm not taking a stance on whether it's good or bad, it's just very interesting. And there's clearly an algorithm that is now really, like, if I scroll, if I go on X right now, I will see someone being killed with a knife, I reckon, within 30 seconds, and I will see someone getting hit by a car. Um,

I will see extreme Islamophobia, potentially. But then I'll also see the other side. So it's not just I'm seeing, I'll see all of the sides. And when you were talking earlier about like, is that good for me? I had a flashback to my friend this weekend. It was my birthday. So me and my friends were together, just looking over at him, mindlessly scrolling these like horror videos on Twitter as he was sat on my left thinking, God, he's like frying his dopamine receptors. Yeah.

I just think this whole new free speech movement, what is your take on this idea of free speech in the world? Only humans have free speech. Bots don't have free speech. The tech companies are constantly confusing us about this issue because the issue is not the humans. The issue is the algorithms. And let me explain what I mean.

If the question is whether to ban somebody like Donald Trump from Twitter, I agree this is a very difficult issue and we should be extremely careful about banning human beings, especially important politicians, from voicing their views and opinions. However much we dislike their opinions or them personally, it's a very serious matter.

to ban any human being from a platform. But this is not the problem. The problem on the platform is not the human users. The problem is the algorithms and the companies constantly shift the blame to the humans in order to protect their business interests. So let me unpack this.

Humans create a lot of content all the time. They create hateful content. They create sermons on compassion. They create cooking lessons, biology lessons, so many different things. A flood of information. The big question is, then what gets human attention? Everybody wants attention.

Now, the companies also want attention. The companies give the algorithms that run the social media platforms a very simple goal. Increase user engagement. Make people spend more time on Twitter, more time on Facebook, engage more, sending more likes and recommending it to their friends. Why? Because the more time we spend on the platforms, the more money they make. Very, very simple.

Now, the algorithms made a huge, huge discovery. By experimenting on millions of human guinea pigs, the algorithms discovered that if you want to grab human attention, the easiest way to do it is to press the fear button, the hate button, the greed button. And they started recommending to users

to watch more and more content full of hate and fear and greed to keep them glued to the screen. And this is the deep cause of the epidemic of fake news and conspiracy theories and so forth. And the defense of the companies is we are not producing the content. Somebody, a human being, produced a hate-filled conspiracy theory about immigrants, and it's not us.

It's a bit like, I don't know, the chief editor of the New York Times publishing a hateful conspiracy theory on the front page of the newspaper. And when you ask him, why did you do it? Look what you did. He says: I didn't do anything. I didn't write the piece. I just put it on the front page of the New York Times. That's all. That's nothing. It's not nothing. People are producing an immense amount of content.

The algorithms are the kingmakers. They are the editors now. They decide what gets viewed. Sometimes they just recommend it to you. Sometimes they actually autoplay it for you. Like, you chose to watch some video, and at the end of the video, to keep you glued to the screen, the algorithm immediately, without you telling it to, autoplays

some kind of video full of fear or greed just to keep you glued to the screen. It is the algorithm doing it. And this should be banned or this should at least be supervised and regulated. And this is not freedom of speech because the algorithms don't have freedom of speech. Yeah, the person who produced the hate-filled video, I would be careful about banning them. But that's not the problem. It's the recommendation which is the problem.
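To make the mechanism Harari describes concrete: below is a minimal, purely hypothetical sketch in Python of an engagement-maximizing ranking loop. The field names and the scoring formula are invented for illustration, not any real platform's code; the point is only that an objective of "maximize time on site" never has to mention fear or outrage for the ranker to end up favoring them.

```python
# Hypothetical sketch of an engagement-optimizing recommender.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_watch_seconds: float  # engagement predicted from past user behavior
    outrage_score: float            # proxy feature that correlates with engagement

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The only objective is expected time on site. Nothing here penalizes
    # fear- or hate-driven content; if outrage keeps people watching,
    # the ranking ends up favoring it.
    return sorted(
        candidates,
        key=lambda p: p.predicted_watch_seconds * (1.0 + p.outrage_score),
        reverse=True,
    )

feed = rank_feed([
    Post("calm-explainer", 40.0, 0.1),
    Post("fear-conspiracy", 35.0, 0.9),
])
print([p.id for p in feed])  # ['fear-conspiracy', 'calm-explainer']
```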

The second problem is that a lot of the conversations now online are being overrun by bots. Again, if you look, for instance, at Twitter X as an example, so people often want to know what is trending, which stories get the most attention. If everybody's interested in a particular story, I also want to know what everybody's talking about.

And very often it's the bots that are driving the conversation. Because a particular story initially gets a lot of traction, a lot of traffic, because a lot of bots retweet it. And then people see it, and they don't know it's bots. They think it's humans. So they say, oh, lots of humans are interested in this, so I also want to know what's happening. And this draws more attention. This should be forbidden.

Very basically, you cannot have AIs pretending to be human beings. These are fake humans. These are counterfeit humans. If you see activity online and you think it's human activity, but actually it's bot activity, that should be banned. And it doesn't harm the free speech of any human being, because a bot doesn't have freedom of speech.

I was thinking a lot about what you said, that these algorithms are actually running the world. And I mean, yeah, so if the algorithms are deciding what I see based on what I spend my time looking at, because the platforms want to make more money, and if I have an innate sort of predisposition to spend more time focused on things that scare me, then you just have to give me a couple of years.

And every year that goes past, I'll become more fearful, more scared. It reinforces your own weaknesses. It's like the food industry. The food industry discovered we like food with a lot of salt and fat in it, and it gives us more of it. And then it says: but this is what the customers want. What do you want from us?

It's the same thing, but even worse, with these algorithms, because this is food for the mind. Yes, humans have a tendency that if something is very frightening or something fills them with anger, they focus on it and they tell all their friends about it.

But to artificially amplify it, it's just not good for our mental health and social health. It is using our own weaknesses against us instead of helping us deal with them. Is it fair to say, now this is me just jumping to conclusions a little bit, but is it fair to say that in a world where you remove...

restrictions around blocking certain characters, right-wing characters whose messages are maybe based on immigration, et cetera. You remove those restrictions, so they're all allowed on every platform, and then you program the algorithm to be focused on revenue, that eventually more people will become right-wing? And I say that in part because it's a right-wing narrative to say that immigrants are bad. And, you know, I'm not saying that the left are innocent, because they're absolutely not.

But I'm saying that the fearful narratives, the fear, seems to come more from the right, in my opinion. Like, especially in the UK, the fear was about immigrants: these people are going to take your money, and all these kinds of things. I think the key issue is not to label it as a right or left issue. Because, again, democracy is a conversation. And you can have a conversation only if you have several different opinions, right?

And I think it should be okay to have a conversation about immigration, that people should be able to have different opinions about it. That's fine. The problem starts when one side vilifies and demonizes

anybody who doesn't think like them. And you see it to some extent from both sides. But in the case of immigration, you would have these conspiracy theories that anybody who supports immigration, for instance, wants to destroy the country. They are part of this conspiracy to flood the country with immigrants and to change its nature and whatever. And this is the problem:

once you believe that people who don't think like you are not just your political rivals but your enemies, that they are out to destroy you, that they intend to destroy your way of life, your group, then democracy collapses. Because between enemies, democracy doesn't work. It works if you think that the other side is wrong,

but that they are still essentially good people who care about the country, who care about me, but have different opinions. If you think that they are your enemies, that they are trying to destroy you, then the election becomes like a war, because you're fighting for your survival. You will do anything to win the election, because your survival is at stake.

If you lose, you have no incentive to accept the verdict. If you win, you only take care of your tribe and not of the enemy tribe. What if you don't believe the election is legitimate? Then democracy can't function.

This is basic: democracy can't exist just anywhere. It's like a delicate plant that needs certain conditions in order to survive and to flourish. And one condition, for instance, is that you have information technologies that allow a conversation. Another condition is that you trust the institutions. If you don't trust the institution of elections,

it doesn't work. And a third condition is that you need to think that the people on the other side of the political divide are your rivals, but they are not your enemies. Now, the problem with what's happening now with democratic conversations is this tendency to go to more and more extremes:

it creates the impression that the other side is an enemy. And this is a problem not just for the right, also for the left. That on both sides you see this feeling that the other side is an enemy and that its positions are completely illegitimate. And if we reach that point, then the conversation collapses.

And it should be possible to have complex conversations and discussions about difficult issues like immigration, like gender, like climate change, without seeing the other side as an enemy, which was possible for, you know, for generations. So why is it that now it seems to just become impossible to talk with the other side or to agree about anything?

We have a big election in the United States this year. Very big one, yeah. Do you think a lot about it? Yes, yes. I mean, it seems like it'll be a coin toss, like 50-50. You know, elections become really an existential issue if there is a chance they will be the last elections.

If one side intends to simply change the rules of the game if it comes to power, then it becomes existential. Because, again, democracy works on the basis of self-correcting mechanisms. This is the big advantage of democracy over dictatorship.

In a dictatorship, a dictator can make a lot of good decisions, but sooner or later they will make a bad decision. And there is no mechanism in a dictatorship to identify and correct such mistakes. Like Putin. Yeah. There is just no mechanism in Russia that could say Putin made a mistake. He should go. He should let somebody else try a different course of action.

This is the great advantage of democracy. You try something, it doesn't work, you try something else. But the big problem is, what if you choose someone who then changes the system, neutralizes its self-correcting mechanism, and then you cannot get rid of them anymore?

This is what happened, for instance, in Venezuela. Originally, Chavez and the Chavista movement came to power democratically. People wanted it: hey, let's try this. And now, in the last elections a couple of weeks ago, the evidence is very, very clear that Maduro lost big time. But he controls everything, the election committee, everything. And he claims, no, I won.

And they destroyed Venezuela. You know, it's something like a quarter of the population fled the country, which was one of the richest countries in South America before. And they just can't get rid of the guy. Surely that will never happen in the West. Oh, don't say never in history. History can catch up with you, whoever you are.

And that's one of the illusions we kind of... Venezuela was part of the West in many ways, still is. This is one of the illusions we live under, though. We think that that can never happen to the UK or the United States or Canada, these sort of quote unquote civilized nations. Mm-hmm.

According to some measurements, democracy in the United States is quite new and quite fragile, if you think about it in terms of who gets to vote, for instance. I don't know what the chances are, but even if there is a 20% chance

that a Trump administration would change the rules of the game of American democracy, for instance by changing the rules about who votes or how you count votes, in such a way that it becomes almost impossible to get rid of them. That's not outside the realm of the possible in historical terms.

Do you think it's possible that Trump will do that? Yes. I mean, you saw it on the 6th of January. I mean, the most sensitive moment in every democracy is the moment of transfer of power. And the magic of democracy is that democracy is meant to ensure a peaceful transfer of power. But as I said, like you choose one party, you give them a try. After some time, if people say they didn't do a good job, let's try somebody else.

And, you know, you have people who hold, in the United States, the biggest power in the world. The president of the United States has enough power to destroy human civilization. All these nuclear missiles, all this armament. And he loses the election and he says, okay, I give up all this power and I let the other guy try. This is amazing. And this is exactly what Trump didn't do.

From the beginning, I mean, even from 2016, they asked him directly, if you lose the election, will you accept the results? And he said no. And in 2020, he did not hand over power peacefully. He tried to prevent it. And the fact that he's now running again... I think to some extent the lesson he got from the 6th of January is that I can basically get away with anything,

at least with my people, with my base, that it was like a test, a try. If I do this extreme thing and they still support me afterwards, it basically means they will support me no matter what I do. I'm wondering in a world of such a fragile democracy, when information flows and networks are disrupted by something like AI, if

misinformation and disinformation and the ability for me to make a video, I could make a video right now of Donald Trump speaking and saying something in his voice. And I could help that video go viral. Like, how do you hold together

democracy and communication when you don't believe anything that you're seeing online? And we're just at the start of this now, so we haven't seen anything yet. This is just really the first baby steps. I'm going to play a video on the screen right now so people can see, and for those listening, you'll just hear it. I'm going to play a video that Isaac, over there in the corner of the room, made of me speaking in this chair. And it wasn't me, and I didn't say it, and I wasn't in this chair.

Hey there, this is AI Steve. Do you think I'll be able to take over the diary of a CEO one day? Leave your comments below. And it sounds exactly like me, identical, and it's not me. And I wonder this with, you know, most of us get our political information and our information generally now from social media. Yeah. And if I can't believe anything that I'm seeing because it's all easy to make, some kid in Russia in their bedroom can make a video of the prime minister here.

I don't know where we get our information from anymore, how we verify. The answer is institutions. We've been in this situation many times before in history, and the answer is always the same, institutions. You cannot trust the technology. You trust the institution that verifies the information. Think about it like with print, that you can write on a piece of paper anything you want.

You can write the Prime Minister of Britain said, and then you open quotation marks and you put something into the mouth of the Prime Minister. You can write anything you want. And when people read it,

They don't believe it, or they shouldn't believe it. Just because it's written that the prime minister said it doesn't mean that it's true. So how do we know which pieces of paper to believe? An institution. We would believe, or there's a greater chance we would believe, if on the front page of the New York Times or of the Sunday Times or of the Guardian, you have: the British prime minister said, open quotation marks, blah, blah, blah.

Because we don't trust the paper or the ink. We trust the institution of the Guardian or the Wall Street Journal or whatever. With videos, we never had to do that because nobody could fake them. So we trusted the technology. If we saw a video, we said, this has to be true.

But when it becomes very easy to fake videos, then we revert to the same principle as with print. We need an institution to verify it. If we see the video on the official website of CNN or of the Wall Street Journal, then we believe it because we believe the institution backing it.

And if it's just something on TikTok, we know that, you know, any kid can do that. Why should I believe it? So now we are in the transition period. We are still not used to it. So when we see a video of Donald Trump or Joe Biden, the video still gets to us because we grew up in a time when it was impossible to fake it. But I think very quickly people will realize you can't trust videos. You can only trust the institutions themselves.

And the question is, will we be able to produce, to create, to maintain trustworthy institutions fast enough to save the democratic conversation? Because if not, if you can't believe anything, this is the ideal for dictators, right?

When you can't trust anything, the only system that works is a dictatorship. Because democracy works on trust, but dictatorship works on terror, on fear. You don't need to trust anything in a dictatorship. You don't trust anything. You fear. For democracy to work, you need to trust, for instance, that some information is reliable, that the election committee is impartial, that the courts are just,

And if more and more institutions are attacked and people lose trust in them, then democracy collapses. But going back to information: one option is that the old institutions, like newspapers and TV stations, will be the institutions that we trust to verify certain videos, or we will see the emergence of new institutions.

And again, the big question is whether we'll be able to develop trust in them. And I specifically say institutions and not individuals. No large-scale society, especially not a democratic society, can function without trustworthy bureaucratic institutions. And will those bureaucratic institutions be AI? That's the big question, because increasingly, AIs will be the bureaucrats.

What do you mean by bureaucrats? What's the word bureaucrat? What does that mean? Oh, that's a very important question, because human civilization runs on bureaucracy. Bureaucrats are essentially officials in government that try... Not just in government. I mean, the origin of the word bureaucrat, it comes from French, from the 18th century. And bureaucracy means the rule of the writing desk: to rule the world, or to rule society,

with pen and paper and documents. Like the example we gave in the very beginning about ownership. You own a house because there is a document in some archive that says you own it. And a bureaucrat produced this document. And if you now need to retrieve it, then it is the job of a bureaucrat to find the right document at the right time.

And all big systems run on it. Hospitals and schools and corporations and banks and sports associations and libraries, they all run on these documents. And the bureaucrats who know how to read and write and find and file documents. One of our big problems is that it's difficult for us to understand bureaucratic systems because

they are a very recent development in human evolution. And this makes us suspicious of them. And we tend to believe all kinds of conspiracy theories about the deep state and about what's going on in all these bureaucracies. And it's really complicated. And it's going to get more complicated as more of the decisions are made by AI bureaucrats.

An AI bureaucrat means that decisions like how much money to allocate to a particular issue will no longer be made by a human official. They will be made by an algorithm. And when people ask why the system is broken, why they didn't give enough money to fix it: I don't know, the algorithm just decided to give the money to something else. Why will

bureaucracies be run by AI over people? Like, why will a nation at some point decide that, in fact, AI is better at making these decisions? First of all, it's not a future development. It's already happening. More and more of the decisions are being made by AIs. And this is just because the amount of information you need to take into account is enormous, and it's very difficult for humans to do it. It's much easier for the AIs to do it.

All these people, you know, bureaucrats, lawyers, accountants... I always wonder, you know, what are humans going to be left to do? In your book, you say that AI is going so far...

AI is going so far beyond human intelligence that it should actually be referred to as alien intelligence. And if it goes so far beyond human intelligence, it's my assumption that most of the work that we do is based on intelligence. So even me doing this podcast now, this is me asking questions based on information that I've gathered, based on what I think I'm interested in, but also based on what I think the audience will be interested in. And

Compared to AI, I'm like a little monkey. Do you know what I mean? If an AI has an IQ that is 100 times mine and a source of information that is a million times bigger than mine, there's no need for me to do this podcast. I can get an AI to do it. And in fact, an AI can talk to an AI and deliver that information to a human. But then if we look at most industries, like being a lawyer, accountancy, I mean, a lot of the medical profession is based on information. Yeah.

I think the biggest employer in the world is the profession of driving, whether it's delivery or Uber or whatever it is. Where do humans belong in this complex? Anything which is just information in, information out is ripe for automation. These are the easiest jobs to automate. Like being a coder.

Like being a coder, or like being an accountant, at least certain types of accountants, lawyers, doctors, they are the easiest to automate. If the only thing a doctor does is take information in, all kinds of results of blood tests and whatever, and put information out, diagnose the disease and write a prescription, this will be easy to automate in the coming years and decades.

But a lot of jobs, they require also social skills and motor skills.

If your job requires a combination of skills from several different fields, it's not impossible, but it's much more difficult to automate. So if you think about a nurse that needs to replace a bandage for a crying child, this is much, much harder to automate than a doctor that just writes a prescription. Because this is not just data.

The nurse needs good social skills to interact with the child and motor skills to just replace the bandage. So this is harder to automate. And even for people who just deal with information, there will be new jobs. The problem will be the retraining. And not just retraining in terms of acquiring new skills, but psychological retraining.

How do you kind of reinvent yourself in a new profession and do it not once, but again and again and again, because as the AI revolution unfolds, and we are just at the very beginning of it, we haven't seen anything yet. So there will be old jobs disappearing, new jobs emerging, but the new jobs will rapidly change and vanish. And then there'll be a new wave of new jobs. And people will have to reinvent themselves four, five, six times to stay relevant.

And this will create immense psychological stress.

So many of the big companies are also working at the same time on humanoid robots. There's this humanoid robot race going on. And by humanoid robots, I mean, you know, Tesla have their humanoid robot, I think it's called Optimus, which they're developing, and it'll cost, you know, X thousands of pounds. And I watched a video of it recently where it can do quite delicate sort of motor-skill-based stuff. So it can probably clean the house, it can probably work on the production line, it can probably put things in boxes, um,

And I just wonder, when we say, you know, people are going to lose their jobs, in a world where you have humanoid robots and you have intelligence that's beyond us, and you combine the two, where these humanoid robots are very, very intelligent... I don't know, like, where do the unemployed go to find these new professions? Like, obviously it's difficult to forecast the new professions of the future. History tells us that. Yeah.

But I can't figure out what the new professions are. I mean, my girlfriend does breathwork. I guess the breathwork part is quite easy to disrupt, but then she takes women away for retreats in Portugal and stuff. So I'm like, okay, she's going to kind of be safe, because these women are going there to connect with humans and to be in this little special place offline, intentionally. So retreats, she'll probably be fine. Anything that... you know, there are things

that we want in life which are not just about solving problems. Like, I'm sick, I want to be healthy, I want my problem solved. But there are many things where what we want is a connection. Like if you think about sports, robots or machines can run much faster than people

for a very long time now. And we just had the Olympics and people are not very interested in seeing robots running against each other or against people. Because what really makes sports interesting in the end is the human weaknesses and the ability of humans to deal with their weaknesses.

And human athletes still have jobs, even though in many disciplines, like running, you can have a machine run much faster than the world champion. I thought about this the other day. And another example is priests. Like, one of the easiest jobs to automate

is the priesthood of at least certain religions, because you just need to repeat the same texts and gestures again and again in specific situations. Like if you have a wedding ceremony, then, you know, the priest just needs to repeat the same words. And there you are, you're married. Now, we don't think about priests as being in danger of being replaced by robots.

Because what we want from a priest is not just the mechanical repetition of certain words and gestures. We think that only another frail flesh and blood human who knows what is pain and love and who can suffer, only they can connect us to the divine. So most people would not be interested in having the wedding conducted by a robot, even though technically it's very easy to do it.

Now, the big question, of course, what happens if AI gains consciousness? This is like the trillion dollar question of AI consciousness. Then all bets are off. But that's a different and very, very big discussion. I mean, whether it's possible, how would we know, and so forth. Do you think it's possible? We have no idea. I mean, we don't understand what consciousness is.

We don't know how it emerges in the organic brain. So we don't know if there is an essential connection between consciousness and organic biochemistry, such that it can't arise in an inorganic, silicon-based computer. First of all, it should be said again that there is a big confusion between consciousness and intelligence.

Intelligence is the ability to reach goals and solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate. Humans and other animals, we solve problems through our feelings. Our feelings are not something on the side; they are a main method for how to deal with the world, how to solve problems. Now, so far, computers...

They solve problems in a completely different way than humans. Again, they are alien intelligence. They don't have any feelings. When they win a game of chess, they are not joyful. When they lose a game, they are not sad. They don't feel anything. Now, we don't know how organic brains produce these feelings of pain and pleasure and love and hate.

So this is why we don't know whether an inorganic structure based on silicon and not carbon, whether it will be able to generate such things or not. That's, I think, the biggest question in science. And so far, we have no answer. Isn't consciousness just like a hallucination? Isn't it just like an illusion that

I think I'm conscious because I've got circuitry which effectively tells me that I am. It tells me, through a bunch of feelings and things, that I'm conscious. Like, I think I'm looking at you now. I think I can see you. The feeling is real. I mean, even if it's like the Matrix and we are all inside it. How do you know it's real? It's the only real thing in the world. I mean, everything else is just conjecture.

We only experience our own feelings, what we see, what we smell, what we touch. This we actually experience. This is real.

Then we have all these theories about why do I feel pain? Oh, it's because I stepped on a nail. There is such a thing in the world as a nail and whatever. It could be that we are all inside a big computer on the planet Zircon run by super intelligent mice. If I spoke to an AI, I could get an AI to tell me that it feels pain and sadness. That's a big problem because there is a huge incentive to train AIs to pretend to be alive.

to pretend to have feelings. And we see that there is a huge effort to produce such a... And in truth, because we don't understand consciousness, we don't have any proof even that other humans have feelings. I feel my own feelings, but I never feel your feelings. I only assume that you're also a conscious being. And society grants a status...

of a conscious entity to not only to humans, but also to some animals, not based on any scientific proof, but based on social convention. Like most people feel that their dogs are conscious, that their dogs can feel pain and pleasure and love and so forth. So society accepts, most societies, that dogs are sentient beings and they have some rights under the law.

Now, even if AI has no feelings, no consciousness, no sentience whatsoever, but it becomes very good at pretending to have feelings and convincing us that it has feelings, then this will become a social convention that people will feel that their AI friend is a conscious being and therefore should be granted rights.

And there is even already a legal path for how to do it, at least in the United States. You don't need to be a human being in order to be a legal person.

It's funny, because you kind of alluded, jokingly, to the fact that we might just be in a simulation. It was one of those, well, maybe we're just in a simulation. Yeah, could be. And in a world of AI, I think my belief in that as a possibility has only increased: that this is in fact just a simulation. Because I've watched us go from, when I was born, not really having internet access, to now being able to kind of speak to this alien on my computer that can do things for me,

and having virtual reality experiences, which are sometimes quite indistinguishable from reality, where I fall into the trap of believing that I am inside Squid Game because I've got this headset on. And you play it forward, and you play it forward, and you imagine any rate of improvement, and then I hear the arguments for simulation theory, and I go, do you know what? Probably, if you play this forward 100 years, on the trajectory we're on, then we will be able to create

information networks and organisms, in a laboratory or in a computer, that don't necessarily realize they're in the computer. It's already happening to some extent, you know, with these information bubbles that more and more people live inside. It's still not the whole physical world.

But you get the same event and people on, say, different parts of the political spectrum, they just can't agree on anything. They live in their own matrices. And, you know, when the Internet came along for the first time, the main metaphor was the web, the worldwide web. A web is something that connects everything.

And now the main metaphor, which is what this simulation theory represents, is the cocoon. It's a web that turns on you and encloses you from all sides so you can no longer see anything outside. And there could be other cocoons with other people in them, and you have no way to get to them. Nothing that happens in the world

can connect you anymore because you're in different cocoons. You've only got to look at someone else's phone. Yeah.

You've only got to look at someone else's Twitter or X or Instagram feed. Is this the same reality? It is so different. Do you know what I was talking about over the weekend? My friend was to my left scrolling. He clicked on the discovery section, which is where you find new content. I looked down at his phone and was like, it's all Liverpool Football Club. It's like the entire feed is Liverpool. And my entire feed is completely different. And I was just thinking, wow, he lives in a completely different world to me.

Because he's a Liverpool fan, I'm a Manchester United fan. And to think about that, like to think that when you open your phone, and many of us are spending up to nine hours a day on our mobile phones, you're experiencing a completely different window into a completely different world than I am. And this was, you know, this is a very ancient fear. Because, for instance, Plato wrote exactly about that in the most famous parable, I think, from Greek philosophy.

It is the allegory of the cave, in which Plato imagines a theoretical scenario of a group of prisoners chained inside a cave, facing a blank wall onto which shadows are projected from behind them, and they mistake the shadows for reality. And he was basically describing, you know, people in front of a screen, mistaking the screen for reality.

And you have the same thing in ancient India, with Buddhist and Hindu sages talking about Maya, which is the world of illusions. And the deep fear that maybe we are all trapped inside a world of illusions, that the most important things in the world, the wars we fight, we fight over illusions in our minds.

And this is now becoming technically possible. Like previously, it was these philosophical thought experiments. Now, part of what is interesting as a historian about the present era is that a lot of ancient philosophical problems and discussions are becoming technical issues. That, yes, you can suddenly realize Plato's cave in your phone.
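As a purely illustrative aside, the cocoon dynamic described here can be sketched in a few lines of toy Python (the topic names and numbers are invented; this is not any real platform's recommender). A feed that weights topics by past clicks drifts, user by user, toward showing almost nothing but what each user already clicks on:

```python
import random

# Toy sketch of an engagement feedback loop: invented topics, invented users.
TOPICS = ["liverpool_fc", "man_utd", "politics_left", "politics_right", "cooking"]

def recommend(click_counts, n_items=10):
    # Weight each topic by past clicks, plus one so unseen topics can still appear.
    weights = [click_counts[t] + 1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n_items)

def simulate_user(favorite_topic, days=30):
    clicks = {t: 0 for t in TOPICS}
    for _ in range(days):
        for item in recommend(clicks):
            if item == favorite_topic:  # the user clicks what they already like
                clicks[item] += 1
    return clicks

# Two users served by the same algorithm end up in near-disjoint cocoons.
print(simulate_user("liverpool_fc"))
print(simulate_user("man_utd"))
```

Nothing in the loop is malicious; the narrowing falls straight out of optimizing for clicks.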

So scary. I find it really scary because you're right, like,

I think right now, some people might say that they have some kind of grasp over the ranking system, or why something shows up when I search it, or whatever. But as these alien intelligences become more and more powerful, of course we will have less understanding, because we're handing over the decision-making. In some industries, they are now completely the kingmakers. Like, I'm here on a book tour. I wrote Nexus, so I go from podcast to podcast, from TV station to TV station, to talk about my book.

But the entities I'm really trying to impress are the algorithms. Because if I can get the attention of the algorithms, the humans will follow. Yuck. You know, that's how we are. We are basically kind of carbon creatures in a silicon world. I used to think we were in control, though.

And I feel like the silicon's in control. Control is shifting. We are still in control to some extent. We are still making the most important decisions, but not for long. And this is why we have to be very, very careful about the decisions we make in the next few years, because in 10 years, in 20 years, it could be too late. By then, the algorithms will be making the most important decisions.

You talk about a couple of big dangers you see with the algorithms and AI and this shift and disruption of information. One of them is this alignment problem. How would you explain the alignment problem to me in a way that's simple to understand?

So the classical kind of example is a thought experiment invented by the philosopher Nick Bostrom in 2014, which sounds crazy, but, you know, bear with it. He imagines a super intelligent AI computer, which is bought by a paperclip factory.

And the manager of the paperclip factory tells the AI: your goal, the reason I bought you, your entire existence, is to produce as many paperclips as possible. That's your goal.

And then the AI conquers the entire world, kills all humans, and turns the entire planet into factories for producing paperclips. It even begins to send expeditions to outer space to turn the entire galaxy into just a paperclip-production industry. And the point of the thought experiment is that the AI did exactly

what it was told. It did not rebel against the humans. It did exactly what the boss wanted. But of course, the strategy it chose was not aligned with the real intentions, with the real interests, of the human factory manager,

who just couldn't foresee that this would be the result. Now, this sounds outlandish and ridiculous and crazy, but it already happened to some extent, and we talked about it. This is the whole problem with social media and user engagement. In the very same years that Nick Bostrom came up with this thought experiment, in 2014,

The managers of Facebook and YouTube, they told their algorithms, your goal is to increase user engagement. And the algorithms of social media, they conquered the world and turned the whole world into user engagement, which was what they were told to do. We are now very, very engaged. And again, they discovered that the way to do it is with outrage and with fear and with conspiracy theories.
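To see the shape of the problem in miniature, here is a hypothetical Python sketch (the posts and scores are invented, not any platform's real code). An optimizer given only "maximize engagement" is structurally blind to any cost that was never written into its objective:

```python
# Invented example posts: engagement is visible to the algorithm,
# social cost is not, because cost was never part of the defined goal.
candidates = [
    {"post": "calm policy analysis", "engagement": 0.2, "social_cost": 0.0},
    {"post": "cute animal video",    "engagement": 0.5, "social_cost": 0.0},
    {"post": "outrage conspiracy",   "engagement": 0.9, "social_cost": 0.8},
]

def algorithm_choice(posts):
    # The goal as defined: maximize engagement. Nothing here rewards or
    # penalizes social_cost, so it cannot influence the choice.
    return max(posts, key=lambda p: p["engagement"])

print(algorithm_choice(candidates)["post"])  # prints: outrage conspiracy
```

Swap "paperclips produced" for "engagement" and this is Bostrom's thought experiment in miniature: the optimizer does exactly what it was told, and only what it was told.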

And this is the alignment problem. When Mark Zuckerberg told the Facebook algorithms, increase user engagement, he did not foresee and did not wish that the result would be the collapse of democracies, waves of conspiracy theories and fake news, hatred of minorities. He did not intend it. But this is what the algorithms did. Because there was a misalignment

between the goal that was defined for the algorithm and the interests of human society, and even of the human managers of the companies that deployed these algorithms. And this is still a small-scale disaster.

Because the social media algorithms that created all this social chaos over the last 10 years are very, very primitive AI. They are like the amoebas, if you think about the development of AI as an evolutionary process.

This is still the amoeba stage. The amoeba being the very simple...? The very simple life forms at the beginning, like the single-cell life form. In evolutionary terms, in organic evolution, we are billions of years before we will see the dinosaurs and the mammals or the humans. But digital evolution is billions of times faster than organic evolution. So the distance between an AI amoeba

and the AI dinosaurs could be covered in just a few decades. If ChatGPT is the amoeba, what would the AI Tyrannosaurus rex look like? And this is where the alignment problem becomes really disconcerting.

Because if so much damage was done by giving kind of the wrong goal to a primitive social media algorithm, what would be the results of giving a misaligned goal to a T-Rex AI in 20 or 30 years?

The issue at the heart of this is, you know, some people might think, okay, just give it a different goal. But when you're dealing with private companies that are listed on the stock market, there really is only one goal, and that's money. Exactly: profits, survival. So what do the platforms say? You know, the goal of this platform is to make more money and to get more attention. Because, also, it's mathematically easy.

And there is a huge, huge problem in how to define the goal for AIs and algorithms in a way they can understand. Now, the great thing about "make money" or "increase user engagement" is that it's very easy to measure mathematically. One day you have a million hours being watched on YouTube; a year later, it's 2 million. Very easy for the algorithm to say, hey, I'm making progress.

But let's say that Facebook had told its algorithm: increase user engagement in a way that doesn't undermine democracies. How do I measure that? Who knows what the definition is for the robustness of democracy? Nobody knows. So defining the goal for the algorithm as "increase user engagement but don't harm democracy" is almost impossible. This is why they go for the easy goals, which are the most dangerous.
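A hypothetical sketch of why the easy goal wins (invented data; "democracy_health" is not a real metric anyone has defined): one objective is a number you can compute from logs, the other is a function nobody knows how to write:

```python
def engagement(logs):
    # Trivially measurable: just add up watch hours from the logs.
    return sum(session["watch_hours"] for session in logs)

def democracy_health(society):
    # Nobody can write this function: there is no agreed definition,
    # no dataset, and no unit of measurement for "robustness of democracy".
    raise NotImplementedError("no agreed way to measure this")

logs = [{"watch_hours": 1.5}, {"watch_hours": 3.0}]
print(engagement(logs))  # 4.5: the algorithm can see progress on this goal
# democracy_health(...)  # the constraint we actually care about cannot be coded
```

The asymmetry is the point: an optimizer can climb the first function relentlessly, while the second cannot even be stated.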

But even in that scenario, if I'm the owner of a social network and I say, increase user engagement but don't harm democracy, the problem I have is that my competitor, who leaves out the second part and just says increase user engagement, is going to beat me, because they're going to have more users, more eyeballs, more revenue. Advertisers are going to be happier. Then my company is going to falter, investors are going to pull out. That's the question, because there are two things to take into consideration.

First of all, you have governments. Governments can regulate, and they can penalize a social media company that defines goals in a socially irresponsible way, just as they penalize newspapers or TV stations or car companies that behave in an antisocial way. The other thing is that humans are not stupid and self-destructive people,

and we would like to have better products, in the sense of also socially better products. And I gave earlier the example of food diets. Yes, the food companies discovered that if they fill a product artificially with lots of fat and sugar and salt, people will like it. But people discovered that this is bad for their health.

So you now have, for instance, a huge market for diet products, and people are becoming very aware of what they eat. The same thing can happen in the information market. The cost of that, though: like, 70 to 80% of people in the US have chronic disease and are obese, and life expectancy now looks like it's going the other way a little bit in the Western world. And, I don't know, I just feel like policing...

consumption of goods like alcohol, nicotine and food seems much simpler than policing information and the flow of information, beyond, you know, beyond racism or inciting violence. I don't know how you police it. We already covered that the two most basic and powerful tools are to hold companies liable

for the actions of their algorithms, not for the content that the users produce, but for the actions of the algorithms. I don't think we should penalize Twitter or Facebook. If somebody posts a racist post,

I would be very careful about penalizing Facebook for that, because then who decides what is racism and so forth? But if the algorithm of Facebook deliberately spreads some racist conspiracy theory, that's the algorithm. That's not human free speech. How do you know it's a racist conspiracy theory, though? Okay, so now we get to the difficult conversation, but this is something that we have the codes for.

And I would be very, very careful about having the courts judge the content produced by individual users. But when it comes to algorithms deliberately, routinely spreading a particular type of information, like a conspiracy theory, we can involve the courts. The key issue is who has liability.

It's the company that is liable for what the algorithm is doing, and not the human individual for what they are saying. And another key distinction here is between private and public. Part of the problem is the erasure of the boundary between the two. I think that humans have a right to stupidity in private.

That in your private space with your friends and with your family, you have a right to stupidity. You can say stupid things. You can tell racist jokes. You can tell homophobic jokes. It's not good. It's not nice. But you're a human being. You're allowed to do that. But not in public.

Even for politicians, like as a gay person, if the prime minister tells a homophobic joke in private, I don't need to care about that. That's his or her business. But if they say it in public on television, that's a huge problem. Now, traditionally, it was very easy to distinguish private from public. You are in your private house with a group of friends. You say something stupid. That's private. It's nobody's business.

You go to the town square and you stand on a pedestal and you shout something to thousands of people, that's public. Here you can be punished if you say something racist or homophobic or outrageous. But it was easy for you to know. Now the problem is, you go, let's say, on WhatsApp. You think you're just talking with two of your friends, and you say something really stupid, and then it goes viral and it's all over the place.

And I don't have an easy solution for that. But one measure which is adopted by some governments is, for instance, that people who have a large following, they are held to a different standard than people who don't. Even the most basic thing of identifying yourself as a human being.

We don't want that everybody would have to get some certification from the government to talk with their friends on WhatsApp. But if you have 100,000 followers online, we need to know that you are not a bot, that you're actually a human being. And again, this is not covered by freedom of speech because bots don't have freedom of speech.

It's a slippery slope, right? Because I've gone back and forth on this argument of anonymity and whether it's a good thing or a bad thing for social networks. And the rebuttal that I got when I leant towards the side of IDing people is that totalitarian governments

will use that as a way to basically punish the people who are speaking. The totalitarian governments are doing it whether we like it or not. Yeah. It's not a question of, if the British do it, then the Russians will say, okay, so we'll also do it. The Russians are doing it anyway. What if Americans start to do it? What if someone speaks out against Trump and he has access to their identity and information? Can he go after them and get them arrested?

If we reach that point when the courts will allow such a thing, then we are in very deep trouble already. And what we should realize is that with the surveillance technology now in existence, a totalitarian government has so many ways to know who you are.

That's not the main issue. You talked about the platforms being responsible for the consequences. Yes. In the UK, over the last month, we've had...

I don't know if you've heard, but we had lots of riots. And I think it was all triggered originally when news broke that someone had murdered some young children, and there was confusion, or sort of misinformation, around that person's religion. And that meant that people... That's an excellent example. Because, you know, if I personally, privately say...

to just two of my friends, I think the person who did it is X, I don't think you should be prosecuted for that. I could say it in a private living room, and it's the same thing if I say it on WhatsApp or on Facebook. But if a Facebook algorithm picks up this piece of fake news and starts recommending it to more and more users...

then Facebook is liable for the action of its algorithms. You should be able to take it to court and say the algorithm deliberately recommended a piece of fake news. And again, if the fake news was produced by an influencer with a million followers, then he is also liable for that. But if a private individual in a private setting

said something which is not true, it's fake news, and then an algorithm deliberately spread it, the main fault is with the algorithm. And the people who should be in jail are the managers of the company that owns this algorithm, and not the individual who uttered the words. Going back to the riots issue, let's say that, I don't know, The Guardian,

on the day of the riots, decided to pick up a piece of this fake news and publish it on its front page. And they now take the editor of The Guardian to court. And he says, but I didn't write it. I just found this piece of fake news and decided to put it on the front page of The Guardian. Now, it would be obvious to us that the editor did something very, very, very wrong.

And he or she might have to sit in jail. And it's not the problem of the person who originally produced the piece of fake news. If you're the editor of one of the biggest newspapers in the country and you decide to publish something on your front page, you had better be very, very sure that what you're publishing is the truth, especially if it can incite violence. How would a social network owner know that? How would they be able to verify that everything is true?

At that scale? Not everything. But if, for instance, something is likely to lead to violence, it's a precautionary principle. First of all, do no harm. Again, I'm not asking Facebook to censor the piece of fake news. I'm only asking it, don't get your algorithms to spread it on purpose in order to get user engagement and make a lot of money.

If you're not sure about it, just don't spread it. It's as easy as that. How does it know it's fake news versus it thinking that it's actually really important, life-saving news? So for example... That's the responsibility of the company. Like, how does the editor of The Guardian know, or of the Financial Times, or of the Sunday Times? How do they know if something is true and if something should be published on the front page?

If you are now managing a social media company, you are managing one of the most powerful newspapers in the world.

And you should have the same kind of responsibilities and the same kind of expertise. If you have no idea how to judge whether an algorithm should recommend something to millions of people, you're in the wrong business. You know, if you can't stand the heat, get out of the kitchen. Don't run a social media company if you don't know what should be shown to millions of people. It's very pertinent because obviously Mark Zuckerberg's letter that he wrote this week says that

I was approached by the FBI, who told me that Russia was trying to influence the elections. And they were given some information that there was this laptop story: Hunter Biden, who's Joe Biden's son, had this laptop story, and Facebook didn't know if it was real or not. And they thought maybe it was a Russian plant, that Russia had put the story there to try and make sure Joe Biden didn't win the elections. So Facebook

deprioritized it, stopped it going viral, and suppressed it. It turns out it was a real story and it wasn't fake, and Mark Zuckerberg says he regrets suppressing it, because it was in fact a real story, and by suppressing it he kind of influenced the election to some degree.

So it's so complicated to the point that I just can't... It's complicated to run a big media company. It's complicated to run the Wall Street Journal or Fox News. And then what happens if the FBI comes to Fox News or comes to the Wall Street Journal and tells them, look, there is this story planted by the Russians. Don't encourage it. And later on, it turns out that it was wrong. Yeah.

Could happen. And as the manager of the Wall Street Journal, you need to deal with it. And do I trust the FBI? Under what conditions? Sometimes I should. Sometimes I should be suspicious. I feel like you're going to end up in jail. If you're the editor of the Wall Street Journal, you're going to end up in jail either way, because either way you're influencing elections. But that's the business. I mean, the real problem is when you have extremely powerful people

like Zuckerberg or Elon Musk, who pretend that they don't have power, that they don't have influence, that they don't shape elections. We have known for centuries that the owners and editors of newspapers shape elections, and therefore we hold them to certain standards. And now the owners and managers of platforms like Twitter and YouTube and Facebook have more power

than the New York Times or the Guardian or the Wall Street Journal. And they should be held to at least the same degree of accountability. And their shtick that, oh, we are just a platform. We just allow everybody to publish what they want. It doesn't work like that. And we don't accept it with traditional media.

So why should we accept it? That's the whole trick of these tech companies. That again, we have thousands of years of history and they tell us, oh, it doesn't apply to us.

Like if you have a traditional industry like cars, it's obvious to everybody: you cannot put a new car on the road unless you've made some safety checks to make sure the car is safe. You cannot put a new medicine or a new vaccine on the market without safety checks. That's obvious, right? But when it comes to algorithms, no, no, no. That's a different set of rules. You can put any algorithm you want on the market. You don't need any safety rules.

And even more basic than that, you think about something like theft. You have the Ten Commandments. Don't steal. And, you know, people know, yes, you shouldn't steal until it comes to information. Ah, no, no, no. It doesn't apply to information. I can take your information and without your permission, do all kinds of things with it and sell it to third parties. And this is not stealing. Don't steal doesn't apply to my line of business.

And this is what the tech giants have been doing in many cases over the last decade or two, telling us that history doesn't apply to them, that all the wisdom that humanity gained in a very painful way

over centuries and thousands of years of dealing with dictatorships and with whatever, it doesn't apply to the new technology. And it does. It does apply. Do you ever feel tempted to just log off and just like go live in a field somewhere, maybe like a desert, maybe just create a little bit of a cult? I do it every year. Oh, really? Yeah. I take a long meditation retreat of between 30 days and 60 days.

Like this year, I plan, in December after the book tour is over, to go for a 60-day meditation retreat in India and just completely disconnect. No smartphone, no internet, not even books or writing paper. Just an information fast. Why? It's good for the mind. Again, like with food, too much in isn't good for us. We need time to digest and to detoxify.

And it's true of the mind as well. If you just keep bombarding it with more, you get addicted to the wrong things. You develop bad habits. And you need, or at least I need, time off in order to really kind of digest everything that happened.

And to decide what I want and what I don't want, what kind of habits, addictions I should try to be rid of. And also to, you know, to get to know my own mind. When the mind is constantly bombarded by information from outside, it's so noisy, you cannot get to know it because there is so much noise.

But when the noise goes away, then you can start to understand: what is the mind? How does it function? How does it work? Where do thoughts come from? What is fear? What is anger? When you're boiling with anger because of something you've just read, you are focused on the object of your anger, but you can't understand the anger itself. The anger controls you. When you...

have an information fast. You can just observe what happens to me when I'm angry. What happens to my mind, to my body? How does it control me? And this is more important than any angry story in the world.

to understand what anger actually is. It's very, very difficult. I mean, how many times do people stop and just, you know, try to get to know their anger? And not the object of the anger, which is what we do all the time. We kind of replay it. We heard something terrible from a politician we don't like. Like, I don't know, somebody's angry about Trump, so he replays it again: oh, he said this, he did that, he will do this, he will do that. And you don't get to know your anger that way.

I have about 50 different companies in my portfolio at Flight Group now, some of which I've invested in and some of which I've co-founded or founded myself. One thing I've noticed is that most companies don't put enough effort into the hiring process. In my mind, the first and most critical thing in business is assembling your group of people because the definition of the word company is group of people. And throughout all of my companies, whenever I'm looking to hire someone, my first port of call is LinkedIn Jobs, who I'm happy to say are also a sponsor of this podcast.

They've helped us source professionals who we truly can't find anywhere else, even those who aren't actively searching for a new job, but who might be open to a perfect role. In fact, over 70% of LinkedIn users don't visit other leading job sites. So if you're not looking on LinkedIn, you're probably looking in the wrong place. So today, I'm giving the Diary of a CEO community a free LinkedIn job post. Head to linkedin.com slash DOAC now and let me know how you get on. Terms and conditions apply.

So interesting. I was playing out the scenarios in my head as you were speaking of this future where there's almost these two species of human. You have one species of human who are connected to the information highway through the internet, through the neural link in their brain. That's just like they're hooked. And the algorithm is feeding them information and they're acting upon it and they're feeding it. And then you have this other group of people who decided to reject that, who didn't get the neural link, who aren't trying to interface with AI. Yeah.

and that are living in a tribe in some jungle somewhere. And I like... My girlfriend said this to me many years ago. She's going, I think there's going to be a split. Yeah. And I kind of like, you know, whatever. But now I'm like, I can see why as things get more extreme, you just go, you know what? I'm going to make a decision here. And especially when I saw the Neuralink that Elon Musk's working on that allows you to control computers with your brain. And the computer to control your brain also. Yeah. You're right. I actually didn't think about that. But I just imagined...

and this is a question for everyone listening: if there's you and me, and I have the chip in my brain, the chip that humans now have in their brains and use to control computers,

I am a different species to you, because I can control my car downstairs. I can control the lights in this room. I can ask my brain questions and get the answers. My IQ becomes 5,000; yours is still 150 or 200. Yours is probably 250. But I'm a different species to you. I have such a huge competitive advantage over you that if you don't get the chip,

then you're screwed. That's speciation. Yeah. Again, on a small scale, we saw it before in history. There were the people who adopted the written document and the people who rejected it. And they are not with us anymore, because the people who adopted the written document built these kingdoms and empires and conquered everybody else.

And we are in danger of the same thing happening. And this is not a good thing, because it's not like life was better for the people with the documents. In many cases, life was better for the hunter-gatherers who lived before. So what's the solution? Having read your book, your brilliant book, Nexus: A Brief History of Information Networks from the Stone Age to AI, what is the solution?

How do we stop the alignment problems, us all becoming paperclips, the social chaos, the misinformation, the silicon curtain, as you talk about in the book? How do we stop these things destroying our world? Is there hope? Are you optimistic? The key is cooperation, is connection between humans. I mean, the humans are still more powerful than the AIs.

The problem is that we are divided against each other, and the algorithms, unintentionally, are increasing the divide. And this is the oldest rule of every empire: divide and rule. This was the rule of the Romans, of the British Empire. If you want to rule a place, you divide the people of that place against one another, and then it's easy to manipulate and control them.

This is now happening to the entire human species with AI, that just as we had kind of, you know, the iron curtain in the Cold War, now we have the silicon curtain, dividing not just China from the US, but also Democrats from Republicans, also one person from another person, and all of us from the AIs, which increasingly make the decisions about all that.

We still have the power, for, I don't know, five years, 10 years, 20 years, to make sure it doesn't go in a dystopian direction. But for that, we need to cooperate. Are you optimistic? I try to be a realist. I mean, I just came from Israel, and I saw a country destroying itself for no good reason whatsoever. It's a country that just pressed the self-destruct button, and for no good reason.

And it can happen on a global scale. What do you mean, it pressed the self-destruct button? It's not just the war between Israelis and Palestinians, but Israeli society turning against itself. Greater and greater division and animosity. And it's like a dark hole

of anger and of violence, which is sucking more and more people in. You know, all over the world you now feel the shockwaves from this dark hole in the Middle East. And there is no good reason, no objective reason. If I may say something about the Israeli-Palestinian conflict: there is no objective reason for it.

It's not like there is not enough land between the Mediterranean and the Jordan River that people have to fight for the little land there is, or that there is not enough food. There is enough food for everyone to eat. There is enough land to build houses and hospitals and schools for everyone. Why do people fight? Because of different stories in their minds. They have these different mythologies that God gave this whole place just to us. You have no right to be here. And they fight over that.

And this is a local or regional tragedy. It can happen on a global scale. And if something ultimately destroys us, it will be our own delusions, not the AIs. The AIs, they get their opening because of our weaknesses, because of our delusions.

Yuval, thank you so much for writing a book. I think this book is one of the most well-timed books that I've ever come across because of everything that's happening in the world right now. And it really helped me to understand that the problem isn't necessarily me versus you if you're on the other side of the aisle. The problem is information, the networks of information that we consume, who's controlling those networks of information, right?

Somebody is manipulating us to be on different sides, not just to be on different sides, but to see each other as enemies. And right now that's a person, but it might not be. Soon it might not be a person, no. And understanding that I think helps us focus on the root cause of issues that are sometimes hard to identify. I think the problem is my neighbor. I think it's that person with different color skin. But actually, if you look one level deeper, it's the information networks and what I'm being exposed to that are

brainwashing me and creating those stories. And as you talk about in your previous book, stories are ultimately what run the world. And Nexus is just a wonderful book at a wonderful time, one that helps us access this knowledge of the power of information, and how it impacts democracy and relationships and society and business and everything in between, in a way that I hope will

lead to action. And I think that is something to be optimistic about. Yeah, and ultimately, I think most humans are good. They're good people. When you give people bad information, they make bad decisions. The problem is not with the humans; it's with the information. Amen. Yuval, we have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're going to be leaving it for. Oh, okay. And the question left for you is...

What does it mean to be strong? To accept reality as it is. To deal with reality without trying to hide it, disappear it, put a veil over it. So interesting. I think you're right. I think you're right. Certainly not the answer I would have given. But, you know, you come to... What would you say? Oh, what would I say?

I guess I probably would have spoken to something like perseverance in the face of a lot of different difficulties. And one of those is information. But it's just that idea of persevering towards whatever your subjective goal is, in the face of, and in spite of, a variety of different difficulties. Maybe that's strength.

So that could be raising a kid or it could be going to the gym or whatever. But I like your definition as well because I think it's much more important in the times we find ourselves in. And honestly, as a podcaster, you sometimes feel like you're caught right in the middle of it because I think everyone's trying to figure out if I'm like on the right wing, on the left wing, if I believe this, if I endorse every guest that I sit with and you almost have to try and remain impartial. But it's very, very difficult to...

For people to understand that. Because they want you to fit somewhere. Because that's weakness. I mean, you have a lot of people who claim to be very strong, who admire strength as a value, but they can't deal with parts of reality that don't fit.

into their worldview or their desires. Yeah. And they think that strength is having the strength to just make these parts of reality disappear. Yeah. And no, this is weakness. And I am sorry for going back to that, but this is also the war. Like, what is war? It's trying to make a part of reality that you don't like disappear. In this case, an entire people.

I don't like these people. I don't think they should be in reality. So I try to make them disappear. And people say, oh, he's a very strong leader. He's not. He's a very weak leader. A strong leader would be able to acknowledge: no, these people exist, they are part of reality. Let's now find out how we live with them. Amen. Your book, Nexus,

A Brief History of Information Networks from the Stone Age to AI, is a must-read for everybody that listens to this podcast and has any interest in these subjects at all. It's endorsed by some of my favorite people: Mustafa Suleiman, but also Stephen Fry and Rory Stewart, who's a great person as well. And it's endorsed for a very good reason, because it's a completely mind-expanding book,

written by someone who only writes exceptional, culture-shifting books. So I'm going to link it below. I highly recommend anybody that's listened to this conversation and is interested in this subject matter to go and get this book right now. It's available right now for pre-order, and it's shipping in five days from now, when it releases. So be the first to read it, and hopefully be the first to understand and action some of the things that you learn in this book. Yuval, thank you so much for your time. Thank you. Thank you.

Isn't this cool? Every single conversation I have here on the Diary of a CEO, at the very end of it, as you'll know, I ask the guest to leave a question in the Diary of a CEO. And what we've done is we've turned every single question written in the Diary of a CEO into these conversation cards that you can play at home.

So you've got every guest we've ever had, their question, and on the back of it, if you scan that QR code, you get to watch the person who answered that question. We're finally revealing all of the questions and

the people that answered the question. The brand new version two updated conversation cards are out right now at theconversationcards.com. They've sold out twice instantaneously. So if you are interested in getting hold of some limited edition conversation cards, I really, really recommend acting quickly.