By the way, in case you haven't heard, my brand new book, Feel Good Productivity, is now out. It is available everywhere books are sold. And it's actually hit the New York Times and also the Sunday Times bestseller list. So thank you to everyone who's already got a copy of the book. If you've read the book already, I would love a review on Amazon. And if you haven't yet checked it out, you may like to check it out. It's available in physical format and also ebook and also audiobook everywhere books are sold.
We've always had this idea that we could have an artificial being that would resemble a human being. It was the conceptual idea that we need to program computers that can make decisions like a human would make decisions. This interview is with Kenneth Cukier. Now, Kenneth is a journalist, a writer, and the deputy executive editor for The Economist. He's written the phenomenally best-selling book Big Data, which has sold over 2 million copies, and which is all about the role of
big data in the world. And he was actually one of the people who kind of popularized the term back in like 2010. And his latest book, Framers, looks at the ways in which humans are able to uniquely reason and understand things in a way that AI can't. GPT is not giving you the answer. Remember, it's an inference. GPT is giving you what it thinks a correct answer looks like.
In this conversation, we talk about the idea of big data, and then we talk about artificial intelligence and how the field developed over the last like 70 years. We talk about some of the risks that we actually face from AI and what we can do about it, like what sort of world we want to live in and how we can use AI as almost a co-pilot for various different things rather than relying on it as being like the sole arbiter for truth.
Absolutely. It's going to take our jobs. Not everyone's job. I think there's always going to be work to do; whether that's well-paid work, whether that's satisfying work, is up to us to create. Work is definitely going to change. Everyone's work is going to change. But that happened with electricity. That happened with computers. We need to reimagine what will bring value to the world and then do that.
All right, Ken, welcome to the show. To start with, I'd love to hear a little bit about your background and how you ended up getting interested and involved in AI and these sort of big data topics that you talk about. What a good topic. What a great place to begin. And thank you for having me on the show. So
I actually sort of remember the moment where I had a little epiphany, a little aha, which was in Cambridge, Massachusetts, walking down the street and realizing that all of the world could be rendered as information. You could put on a lens, if you will, sort of glasses, and then see absolutely everything through the lens.
a different way of seeing it. An architect would see everything: every right angle, every plane. A lawyer would see every product liability, every opportunity, every tort to sue. But a physicist, or maybe even just an inquisitive person (this was around the year 2002, at the outset of the information society) could look at it and sort of render absolutely everything in a datafied format, as information.
You would look at every building and see the height of it. You'd feel the wind and understand that there'd be a temperature, and a wind speed, and a direction to that wind. You'd understand that you have a pulse, and that it could therefore be measured: a heartbeat, hemoglobin levels, and, you know, blood sugar levels.
It's all very peculiar to think that you would be able to measure absolutely everything in the world, but this is an insight that I then realized many people have had for centuries before, whether it's Isaac Asimov in science fiction like Foundation, whether it was
I forgot now his first name. I think it was Joseph Quetelet. No, Adolphe Quetelet, who was a Belgian sort of physicist-stroke-sociologist, who then created this idea of l'homme moyen, the average man, and wrote what he called social physics, the physique sociale, which was the idea of...
enumerating absolutely everything in order to understand it. And there were even Graunt and Halley in Britain in the 1600s, who were able to take a look at astronomy and realize that you could measure the night sky, but then during daylight would look at contemporary Britain and realize you can identify how many people are going to die in a given year. You don't know who, you don't know where, you don't know when,
But you can get a rough estimation with a known margin of error of the annual deaths of a population. And you can then also sell insurance based on that, right? You can understand the number of fires in London, for example. The origin of insurance was just that, enumerating things.
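To make the statistical point concrete, here is a minimal sketch in Python; the population size and death rate are made-up illustrative numbers, not historical figures:

```python
# Graunt and Halley's insight in miniature: you cannot say who will die,
# but with a large population the annual total is predictable.
import math

population = 500_000          # hypothetical city
annual_death_rate = 0.03      # hypothetical per-person chance of dying this year

expected_deaths = population * annual_death_rate
# Treating each life as an independent coin flip, the total is binomial,
# so its relative spread shrinks as the population grows.
std_dev = math.sqrt(population * annual_death_rate * (1 - annual_death_rate))
margin_95 = 1.96 * std_dev    # roughly 95% of years fall inside this band

print(f"Expected deaths: {expected_deaths:.0f} +/- {margin_95:.0f}")
```

An insurer can price policies against that band: the individual outcome is unknowable, but the aggregate comes with a known margin of error, which is exactly the point being made here.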
So I was struck by that epiphany and realized that in the information society, as we did more of our activities intermediated with a computer online in digital format, we, as a default, collected information. It was just a byproduct. And we could suddenly start using that information. I took that insight, like so many of us do, I took that insight personally.
Socked it away, never thought about it again, forgot about it for like a decade. And then, lo and behold, one day I was the Tokyo correspondent of The Economist and the call went out for cover ideas, with this glorious, like, 15-page spread that correspondents get to do, and a quasi-unlimited expense budget. This was before the financial crisis. And I realized this is the moment to do a story on the information of the information society, collecting information. I'd heard of AI, but at the time, no one believed it worked. It was sort of poo-pooed in the academy. I had heard of machine learning, had no idea really what it meant.
But I plowed into this around the year 2009. So it's early. And at the very end of my searches, having blown a lot of money for The Economist, I basically had nothing. And crying into my soup in San Francisco one night, the night before I left, with a friend, he interrupts me and says, you know what I do, don't you? And I said, you're some sort of internet entrepreneur. He's like, I have a big data company. I said, big data? What's that?
Therein began a series of introductions, and a cover story in The Economist, pitched in the fall and published in February of 2010, on big data, introducing the term to the sort of mainstream audience. Oh, really? Is that how the term got introduced? Oh, right. I did not coin the term, but the media, the general media, traditional media, hadn't been reporting on it. And I think it was one of the earliest instances of the term big data being used to describe machine learning as a shorthand, but also this idea that the amount of information had grown to such a degree that we needed new techniques and new infrastructure to process it. If you will, there's a there there. Big data actually means something specific, because it had to be distinguished from what was going on beforehand, under a different set of constraints and a different set of mechanisms and processes and people and talents.
This new technique of big data was sort of inaugurated in 2010 on the pages of The Economist, in a cover story. And the reception to that was so enormous that from that moment on, I realized I'm in the data business. Damn.
Yeah, because I remember, like, I think I first came across the term in like 20, I want to say 2011, 2012. Sure. This was when I was getting into medical school. And, you know, researching, like, what could they ask me at interview? And at the time, there was a little bit of a sort of
rumblings in the medical world around this idea of big data: how it applies to patient data, and the enormous sets of blood test results that we have going back decades, and how to get insights from that. Because I was interested in technology, I was like, "Oh, maybe this could be an angle that I talk about." Thankfully, they never asked me any questions about it because I really wasn't prepared. Well, you didn't do so bad. You got in anyway. But it was around that time, around 2011, that I was writing the book Big Data. In 2012 it came out, so it's about a decade old.
It's still not there, which is sort of interesting. The intellectual underpinning of using data in society to improve society is sort of understood, but to actually make social change takes a long time, takes decades. In the case of medicine, we're still not using information. I mean, we use it in a pointillist way: an internist will walk by the bedside of a person in the hospital, pick up a clipboard or the digital equivalent of it, look at the numbers, and make a decision on it. And that's ridiculous and obscene, because really what you want is the data of every single person who's had that exact same condition and met the same criteria as the patient, going back a decade. And you would then have a sort of co-pilot, if you will (the term that we now use for AI that supports human beings),
which would then make its own estimation. And you might want to blind the two and see if the two decisions, in terms of the next therapy and the next diagnostic, are the same, in which case you're confirming it. Or, if you have a difference, add a third person, a minority report, if you will, to make a decision on it, to vote in one direction or the other. And then use the feedback of what happened: was it the right diagnostic or not? And of course it's never black or white; it's a spectrum of right-ish or wrong-ish. What are the alternatives? You don't know the counterfactual. And then you feed that back in for the next person. Google does this for search. So if everyone starts clicking on the second search result for the same search term, or better yet, if everyone's clicking on the eighth search result for a given search term, Google's algorithm knows to move it up to first or second place, because it's clearly the most popular. Each click is a vote. So why don't we do that in healthcare?
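To make the click-feedback loop concrete, here is a toy sketch in Python; it is an illustration of the "each click is a vote" idea, not Google's actual algorithm, and the query and page names are made up:

```python
# Rerank results for a query using observed clicks as votes.
from collections import defaultdict

clicks = defaultdict(lambda: defaultdict(int))  # query -> result -> click count

def record_click(query: str, result: str) -> None:
    clicks[query][result] += 1

def rerank(query: str, results: list) -> list:
    # Most-clicked results float to the top; Python's sort is stable,
    # so unclicked results keep their original order.
    return sorted(results, key=lambda r: -clicks[query][r])

results = ["page-1", "page-2", "page-3"]
for _ in range(10):
    record_click("chest pain", "page-3")    # users keep choosing the third result

print(rerank("chest pain", results))        # ['page-3', 'page-1', 'page-2']
```

The healthcare analogue would be feeding each confirmed or corrected diagnosis back into the system the same way each click feeds back into the ranking.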
It's crazy. And again, it's obscene, because all it takes is the will and a bit of dosh, and we improve patient outcomes and we improve society and we lower our costs. Yeah. I think there's so much here. Whenever anyone I know, me as well, starts working in medicine and sees the amount of data that is available, we then see that, you know, in the UK, some of it is still on pen-and-paper systems. Some of it you have to use four different apps to access: one for your blood tests, then one for the sort of vital signs of the patient, and another one for your scans, and another one for the drug stuff. And it's like, there are very few systems that integrate all the things. And the ones that do, like Addenbrooke's Hospital in Cambridge, where I was working, which did integrate all the things, it was incredible. I could run my whole night shift from, like, the Burger King cafe downstairs, because they always have Burger Kings in hospitals for some reason. And just, like, on the iPad, because they had an iPad app, you know, someone'd ring me about a patient and I'd be able to immediately see all the things.
It still wasn't co-piloty, in that it wasn't telling me what to do next, but even just the basics of having all the information in one place was a really good starting point. Yeah. Well, keep in mind, we might not want it to be a full co-pilot system, particularly for you at the outset of your career. We want you to make decisions. In fact, we sort of sadly want you to make mistakes, because if you make mistakes when you're young, you probably won't when you're old.
What we would like to do is maybe validate your decision or have you reconsider your decision, right? So say you think that this person has gallstones, and in fact it turns out it's sepsis, right, and anyone could see it a mile away, and your supervisor would, but it's four in the morning, right? We want the AI system, or in this case the big data system, to simply prod you and say, have you considered sepsis? Right? And you'd say, ah, yeah, maybe. Maybe that's right. And you can reconsider. It might want to tell you its confidence, its degree of confidence in its diagnosis, in which case it says that actually we think it's different, but really only with a 60% likelihood.
Or a degree of confidence in an instance where you're quite sure that it's actually not this, for some other intervening reason; the drug's toxicity would kill the patient if you were to give that particular drug, for example. Those are the ways in which we would use data effectively, but it requires two things. One, probably a regulatory change. We need to sort of loosen privacy rules in certain wise ways, still governed under very strict law, but loosened
for this social goal. Privacy law never presumed a world of statistics; it presumed commercial rapaciousness. It doesn't really accept the big data message, the leitmotif, if you will, which is that with a large body of data, you can do things that you inherently can't do with a smaller amount of it. It's not about the pointillist sliver of information, but the agglomeration of a large body of data: asking new questions, and answering them in more complex, deeper, more accurate, and more granular ways than we could ever imagine. That's the first. The bigger problem is a change of mindset. We need, if you will, a data-duty-of-care mindset. And that is to say, we can't accept a world in which
We're going to make human decisions and we're going to try to validate those decisions with a smidgen of information that confirms what we think we're looking at. We reframe it. We reverse the process, if you will, and say that it's not a nice to have, it's a thou must have. You have a legal and moral responsibility to apply data to questions that we can use the data to help us answer. And if you don't,
you're negligent and on the hook. So it's not that a doctor says, here's my decision and here's how I justified it with the data. Rather, it's: here's my decision, and if I didn't use data to confirm it and to validate it, I have not performed my duty of care to my patient, and I go to jail, or I'm on the hook for liability. If we had a world like that, you would never have an environment in which patients go in and hope for the best from whoever is making a bleary-eyed decision at four in the morning, because you're always going to have this sort of support mechanism of...
every single data point the NHS has collected over the last decade for every patient that looks like mine. It would be great if we could move to that world. So the book, Big Data, it's sold like 2 million copies now? Yeah, something like that. An absurdly large number; that's insanely successful for a nonfiction book, for people listening who are not familiar with sales numbers for these things. I'm looking a lot at sales numbers now, and when you're in that world, you realize that for a book to sell 2 million copies is absolutely phenomenal. So, that aside, what is the book Big Data about?
So the book, Big Data, we have to remember... I always knew when I was writing it that within about 10 or 15, 20 years, its success would be such that no one would understand what we were talking about at that future date. Because they wouldn't really believe that the world could have been such a medieval place that people didn't use data in what they were doing. I mean, a classic example is marketing. All of marketing now applies information: when you send an email, who you send it to, how you segment it, right? In the past, we used to have segmentations of potential customers. But Amazon came in. Let's think about the segments that the Amazon homepage uses, right? Do they segment it by country?
Do they segment it by male, female? Does the homepage of Amazon.com segment the consumer by time of day, or by IP address, whether it's the central business district or a residential district? No. These questions are preposterous. There is no Amazon homepage. If there are 1.5 billion Amazon customers, there are 1.5 billion different homepages. Now think about the app of the New York Times or The Economist. Yeah.
That's still segmented, right? In fact, it's probably just done on a geographic basis, with a few items recommended based on previous consumption behavior, right? So there's a ways to go. The book Big Data was taking that original insight, that all of the world can be sort of seen through the lens of information, and saying that if we could collect information and process it wisely, right,
emphasize the word wisely, we can solve a lot of problems that we couldn't before, because we'll be asking different questions while having better answers, i.e., more accurate answers and more granular answers, to the problems that we face. And that we have a moral duty to apply information to our problems: to use data, being empirical evidence of the world, in order to improve the world.
And it was in response to a world in which we weren't doing any of those things in most domains. Underneath that is the concept that, as I said before, there are things you can do with a large body of information that you fundamentally cannot do with a smaller amount of it. That the change in scale leads to a change in state. Or that the quantitative shift leads to a qualitative shift. That more isn't just more. That more is different. What's a practical example of this, just to make it...
more concrete in people's minds. Sure. So, I mean, let's maybe think first in search and then go right into medicine. So if you wanted to understand the way that spam filters would work, I mean, for your younger audiences, I should probably explain what spam is. Sit down, my children. At the outset of the internet, emails were free. Keep in mind before then, postcards were expensive, letters were expensive, and calls were charged by the minute.
This is about 25 years ago, so it's within some of their memories. So email was free, and as a result spam used to clutter up our inboxes. Of course we still get it today, but we never even see it, and the reason why is our spam filters are so incredibly effective. And the reason for that is we no longer have spam filters based on human beings deciding whether a given item is spam or not; we have machines do it, and how they do it is through machine learning.
What we do is we have an algorithm that looks at the known traits of spam, but then has to start from somewhere, right? Viagra, extensions of one's, you know, of one's...
bodily parts. But then what it does is it accepts, in a Bayesian statistical way (it's making an inference with a lot of blurriness at the edges), that maybe Viagra isn't spelled V-I-A-G-R-A but V-1-A-G-R-A, with a Roman numeral one, or actually an Arabic numeral one, and the A might be a four: V-1-4-G-R-4. Every human will read it as Viagra. Now, how do you design a system, a program for a computer to follow a set of instructions, to say that that is Viagra? And the answer is that you really wouldn't. It'd be hard to get every single permutation of how to misspell Viagra that would be understandable to a human but couldn't be captured in hard-and-fast code. So instead you use machine learning to make an inference.
And when you see a lot of these things coming through, you then score it. You see one or two people, maybe, put it into their spam folders on Gmail; suddenly, you do that across the board, and you stop hundreds of millions of items of spam coming through. And that's what happens. So we have a machine learning algorithm making inferences.
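As a concrete illustration of that Bayesian scoring, here is a minimal naive-Bayes-style spam scorer in Python; the tiny training messages are invented, and real filters are vastly more sophisticated:

```python
# Score a message by summing per-word log-likelihood ratios learned
# from labelled spam and non-spam ("ham") examples.
import math
from collections import Counter

spam = ["cheap viagra now", "viagra v1agra deal", "win money now"]
ham = ["meeting at noon", "the quarterly report", "lunch tomorrow"]

spam_counts, ham_counts = Counter(), Counter()
for msg in spam:
    spam_counts.update(msg.split())
for msg in ham:
    ham_counts.update(msg.split())

def spam_log_odds(message: str) -> float:
    # Add-one smoothing keeps unseen words (fresh misspellings like
    # "v14gr4") from zeroing out the estimate; this is the "blurriness".
    total_spam = sum(spam_counts.values())
    total_ham = sum(ham_counts.values())
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (total_spam + 2)
        p_ham = (ham_counts[word] + 1) / (total_ham + 2)
        score += math.log(p_spam / p_ham)
    return score  # positive means "looks more like spam"

print(spam_log_odds("cheap viagra deal"))   # clearly positive
print(spam_log_odds("report at noon"))      # clearly negative
```

Every user who marks a message as spam effectively adds to the training counts, which is how the filter keeps up with new misspellings without anyone writing a rule for them.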
Take that exact same approach, where you don't know the answer, but you design a system that's going to make an inference based on a large body of data. And you can't do this with a small body of data; you wouldn't have enough information. And you're going to, say, spot the telltale traits of cancer years before the cancer actually forms. So it's not yet cancer, but the cells are growing in such a way that it's probably going to become cancerous. Again, you need a large body of data, and when you have that, you then train on the data that you know.
And then you look at things that you don't know, and you make a prediction. That's the big data story. So you take that same approach, and actually the same engineers who worked on a spam filter at Google would publish a paper on how to read an eye scan, a retina scan, and identify who's going to develop cancer. And this is actually now relatively simple. I mean, there have now been 10 years of academic papers on diagnosing medical problems based on large bodies of information, often information that's not related to medicine itself but is a proxy for the condition that people are feeling. Yeah.
So you were saying that figuring out where the AI co-pilot fits into the clinical workflow is the hard part because now all of a sudden you have all of the human issues of like, "Okay, cool. Are we using the model to categorize the backlog of tens of thousands of chest X-rays that we have? Okay, if that's the case, does it need to then be vetted by a second opinion human?"
Okay. If not, then, like, how sensitive or specific is the thing going to be? And in theory, yes, a model could identify, I don't know, the fact that you've got consolidation on the chest X-ray or whatever. But
what does that practically mean for the healthcare system? That's the thing we've not yet figured out. I know, and these are great problems to have. I mean, we're frankly blessed that we can actually even think through these problems. So let's get started. Let's figure out, well, what should the workflow look like? What does the consumer expect? What does the consumer expect under what conditions? So for example, if it's cancer, and it's lung cancer, where getting an accurate diagnosis is very essential but doesn't take particularly long, and a delay of 24 hours isn't going to mean life or death, then that's going to shape the workflow and the processes in which we apply it. But suppose we're trying to understand whether we need to operate on a person's brain because there's a hemorrhage going on inside the skull, and we can get an answer that's inaccurate, but we know how inaccurate it's going to be; we can quantify the degree of inaccuracy. Let's say it's going to be 15 or 20%: one in five is going to be misdiagnosed, in either direction.
But we get an answer in four seconds and not 40 minutes, i.e. we're saving 50% of people's lives. Let's go with that. I mean, that's a no-brainer in an emergency situation. And what about if it's not at a large metropolitan hospital in London in which you get a world's foremost expert looking at it, but it's somewhere in the hinterland? In Niger, they would rather have any diagnosis than no diagnosis at all.
And that's just the issue for today, because of course, as this becomes democratized, these tools' standards are going to increase and their talents are going to increase. Suddenly, I can imagine a day in which some person in a deep, dark place in the world needs an appendix operation, and the appendix needs to come out now to save the person's life. You're going to see something that looks like one of those cargo containers, right, being transported by helicopter and dropped down, maybe parachuted down, because it's made of aluminum and it's light, and out of it is going to come a totally sterile surgical robot (a surgical doctor, if you will, a medical doctor), first confirming the diagnosis that this is indeed what's required, and then, inside a plastic tent in totally sterile conditions, operating on the person and saving the person's life.
Is that going to happen in my own lifetime? I'm a healthy lad. Let's find out. But let's get started. Let's have these conversations right now. Nice. So one of the things that you've been researching a lot more recently is artificial intelligence. And I wonder, for people who are listening to this... everyone's heard of AI at this point. We've all heard of ChatGPT.
but I think going a level deeper and understanding like, what is it? Like, how did it get, like, how did it develop over time? And why is it that some people have been talking about AI for like,
10-plus years, while others only heard about it last year when ChatGPT got big. So I wonder, can we start with: what is artificial intelligence? Let's start at the beginning. Even in Aristotle and others, and in Greek mythology, there were examples of the idea of the robot human that plays the lyre and tills the fields. So there's always been this
beautiful anthropomorphizing and quest for the other, which could either be the mechanical human that alleviates our toil; in fact the term robot comes from robota, which I think means toil in Czech. You can check me, and I believe it was, come on, who, who, who? It was R.U.R., Rossum's Universal Robots, a play by Karel... Čapek is his name, from about 1920. Right now, all of your listeners are pausing and going on Google to find me out. I'm a fraud. But I'm close enough. Or also the idea of the golem, or the Frankenstein monster. We not only anthropomorphize, but we create our evils and embed them in it as well. So we've always had this...
idea that we could have an artificial being that would resemble a human being. But it was Alan Turing who sort of gave us our initial construction of it. First, a paper on computable numbers, identifying that there could be a universal computing machine, which I don't fully understand; it would take too long to truly explain.
And that was around 1935 or so. And then, I want to say about 1950, he writes the paper, in the psychology journal Mind, interestingly, on the imitation game, in which he introduces what was later to be called the Turing test.
There, it was the conceptual idea that we need to program computers that can make decisions like a human would make decisions. So suddenly it asked the question: how do humans make decisions? And so there was a whole movement, from the 1950s to the 1990s, to take the ways that humans make decisions and enshrine that into computers. That's through the hardware and, more importantly, through the software.
And the hardware is pretty simple. In fact, our computer hardware is analogous to the human mind, in that we have memory and we have a processor, right? Something that stores information, and something that takes an input and uses information to come up with an output, an answer. And the original computers were made on the model of the human mind.
But then we also had to create a system in which we were going to render into software the ways in which humans made decisions. So we think sequentially and we think rationally. So if we create a long list (and the original programming language of AI was called Lisp, for list processing), we could simply write down all the rules that we use as heuristics in terms of making decisions to get our output.
That seemed logical. It was deeply, deeply flawed. And the reason why is humans don't think sequentially. We're a jumble of ideas all at once. Do we think rationally? Well, we've spent the last 25 years in cognitive psychology and neuroscience to look at all the ways that we don't think rationally, that we actually embed cognitive biases through everything that we do. So that showed promise and it got us so far but no further.
The early gains became like fool's gold. We thought we were getting closer and closer, but then it became harder and harder and harder to eke out any benefit from it. So what happened during this period? We had what were called the AI winters. This was in the 70s, the 80s, and then later in the 90s; there were one, two, or three, depending on how you count them. But the broader point is that these winters were when government funding and industry funding shriveled up for AI. And around the year 2000, 2005, if you said you worked on AI in academia, you got laughed at, and couldn't get tenure, and couldn't get a post, because it just seemed like, again, a fool's quest. Sorry, a question on that front. So a computer working out that two plus two equals four, that's not AI?
Great question. Let's pause my discourse to say I'm going to introduce this concept of machine learning, really a shorthand for statistical machine learning, and the change of method that led into it. So 2 plus 2 equals 4 is a truth. It's almost like a Euclidean sort of foundation, right? It's basic arithmetic, the origin of mathematics. We should also say mathematics is a little bit trickier still, because at the outset it took centuries, millennia even, before we had the concept of zero. So we started with one; we needed a zero, and its discovery was important. But that's simply to say that we needed imagination and a conceptual shift to understand mathematics. But we have a one and we have a two. We have a two plus two equals four.
If we can enshrine that into a computer that can generate that same answer all the time, and can actually, conceptually, have an idea that anything with one added to it becomes the next number in the sequence, that would be AI. By definition, it's intelligent, however we want to define intelligence. We could talk forever about it. And it's artificial.
The key thing, sort of one of the earliest aphorisms in AI, was that we call AI anything computers cannot yet do. Once we know how to do it, we call it a calculator, or we call it a search engine, or voice recognition, or self-driving cars. But the frontier of what we can't do, we refer to as AI. And then when we have it, it's like,
Oh, that? Oh, that's just a search engine. Oh, that's nothing. Like, that's simple. I'm used to that, right? But of course, you know, Google itself is absolutely remarkable. The distinction is the explicit versus the implicit. The world of Lisp and AI proper, the original AI, from when the term was coined in 1955 for a conference at Dartmouth by Marvin Minsky and John McCarthy among others (there were about three people from Britain there, one of them part of the Selfridges family, by the by), was about explicitly instructing a computer with a set of instructions, programming it: if I give you this big decision tree, if I give you these instructions to do this, you therefore produce this output. And I've got full explainability. I can see on a piece of paper the program that I've introduced into the computer. The shift that happened...
Weirdly enough, around that time, the late 40s, early 50s, there were several people, Frank Rosenblatt being one of them (he died prematurely years later, still in his early 40s), who looked at a different technique, and that was to apply statistics to the problem: collect the data, have the machine make an inference, modeled on the human mind, on the neural network of the human mind, with nodes and a decision layer.
Sounds a little bit like deep learning. So what happened is that the grandees of AI (it's like an academic squabble), in this case Marvin Minsky at MIT and John McCarthy, later at Stanford, looked at this and said, oh, that's never going to work. And they were not wrong; they were right. It wasn't going to work, because of the constraints in computer processing power and memory and cost at the time.
So everyone went with the instruction-based, classical AI, what was called GOFAI, good old-fashioned AI. And they tried to embed a large decision-making tree on how to make decisions. Expert systems is another term for it.
At the same time, this weird method that was poo-pooed, and that no one had any trust in around the 80s and the 90s and the 2000s, was statistical machine learning. Just give the machine a lot of data, let it work out the system for itself, and make an inference. You don't need to know
all of the ways in which it can find these variables and co-variables and coefficients to identify why it makes a decision under these conditions. In fact, it's so intricate that it would exceed our human capacity to grok how to do that. But the effect is that it works and it works better than alternative systems.
Take images. If I describe a cat as having hair, having ears, having a tail, having almond-shaped eyes and whiskers, that's great. But what if I'm looking at a cat that doesn't have a tail, from behind? How do I know it's a cat, right? The answer is, I'd have to program in every exception: oh, here's the exception; what if it's like this, like that, right? It's crazy; the real world is such that you'd never have enough ability to create all these exceptions. Instead, give it, you know, a quarter of a trillion... well, it wouldn't be that many, but several million examples of cats, under all lightings, all conditions, all kinds of cats. Ask it if it's a cat or not; it can do it. It's statistical, right? That method was totally, you know, not part of the research agenda.
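Here is the "examples, not rules" recipe in miniature, using scikit-learn's small built-in handwritten-digit dataset as a stand-in for the cat photos; nobody writes a rule for what an 8 looks like, and the model infers it from labelled examples:

```python
# Train a simple statistical learner on labelled images and test it on
# images it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                        # 1,797 labelled 8x8 images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)     # no hand-written rules anywhere
model.fit(X_train, y_train)                   # "training" = fitting to examples

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```

Swap the digits for millions of labelled cat photos and a much larger model, and you have the same statistical move that the rule-writing, GOFAI approach could never scale to.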
But it showed promise, and it was so effective. There were really only three people in the world who were the leading lights, they and their students, pursuing this sort of ridiculous ambition. One's named Geoffrey Hinton. Another's named Yoshua Bengio. The third is named Yann LeCun. Mm-hmm. Yann LeCun became the head of AI at Facebook; Hinton went to Google; Yoshua Bengio stayed in Montreal in academia, but became an advisor to companies. They won the Turing Award a couple of years ago. The term deep learning, and the AI revolution of around 2012, was by the grace of them. There were other people, Peter Norvig and Stuart Russell and others. The key is this: they reframed AI from trying to give it explicit rules to giving it information, and the computer coming up with its own set of rules through an inference. It wasn't explicit; it was now implicit.
We still call it AI, just because we don't have a better term for it, but it's a totally, totally different technique. And it's been so successful because the cost of memory plummeted, the amount of data that we can collect in society became enormous, and the processing power could be super fast, in particular because of NVIDIA's chips, which do a sort of matrix algebra rather than the regular form of mathematical computation on a CPU; they're graphics processing units. So the whole chip design changed. You change all of these different constraints, and suddenly, around 2012...
ImageNet, which was a competition on identifying images, showed that machines could do almost as well as human beings. And pretty soon after, they were able to exceed the ability of human beings to recognize the content inside of an image: whether it's a muffin or a puppy, and whether it's a school bus with a pink unicorn horn on it.
This episode is very excitingly brought to you by Huel. I've been a paying customer of Huel since 2017, since my fifth year of medical school. And it's absolutely fantastic for those occasions when you need a meal, but you don't have the time to necessarily cook for yourself. Now, this is one of my favorite products that Huel offers: the Ready to Drink,
which is a meal in a bottle. And each meal is 400 calories with 22 grams of protein, which is pretty reasonable. And each serving has 26 vitamins and minerals as well. Huel Ready to Drink is made from natural ingredients like tapioca, sunflower seed, coconut, pea protein, flaxseed and hemp seed protein. It is 100% vegan, like all Huel meals. And it's gluten-free and lactose-free with no animal products and no GMOs. And Huel Ready to Drink comes in eight different flavors, including strawberries and cream and iced coffee caramel, along with the classics like chocolate and
vanilla. Now these are widely available in supermarkets and corner shops and petrol stations all around the UK, but the best way to get them is online. And so if you're like me and you have a fairly busy life where you don't necessarily have time to cook or make the time to cook, then you might like to try out Huel Ready to Drink. Also, if you haven't yet heard the episode of Deep Dive with the founder of Huel, Julian Hearn, you might like to check that out.
But if you'd like to give Huel a try, then head over to Huel.com forward slash deep dive. And that link is available in the video description and the show notes as well. And that link firstly helps me out, but it will also give you a free t-shirt and a shaker. And so you can check out Huel ready to drink and you can also check out the other products that they offer. So thank you so much Huel for sponsoring this episode.
This episode is sponsored by Kajabi, and they've actually got something really valuable for all of our deep dive listeners. Now, if you haven't heard of Kajabi, it's basically a platform that helps creators diversify their revenue with courses and membership sites and communities and podcasts and coaching tools. So it's one of the best places for creators and entrepreneurs to build a sustainable business. We started using Kajabi earlier this year, and as soon as we started using it, we were like, oh my God, why haven't we been using this product for the last three years? It's got everything you'd possibly need for running an online course or hosting an online community or building an online coaching business.
And it essentially makes it really easy to run your entire online business from payments to marketing tools to analytics. Kajabi has everything that we creators need all in one place. And actually, you don't necessarily need a huge audience to generate a sustainable income. A creator on Kajabi can, for example, make $100,000 by converting just 350 customers a year, depending on your price points. And in fact, there are creators on the platform that are making millions of dollars every year with fewer than 100,000 followers across the social media platforms.
We've been using Kajabi to host all of our online courses since the start of 2023, from our $1 part-time YouTuber foundations to help people start off on their YouTube journey, all the way up to our $5,000 package for the part-time YouTuber accelerator
which gives you access to me and my team. And Kajabi does not take any cut of what you earn. Creators keep and own everything. The way Kajabi makes money is through the monthly subscription fee. And even though we generate like literally millions of dollars every year from Kajabi, we're still only paying them a couple of hundred dollars a year. And actually in their lifetime, Kajabi have paid out over $6 billion to creators, that's billion with a B, and over a thousand creators have become millionaires through products on the platform.
Now, back in May 2023, I did a keynote at an in-real-life Kajabi event, the Kajabi Hero Live event, in Austin, Texas. And in that keynote, I talked about the exact steps that I use to grow my business from zero to over two and a half million dollars per year from course revenue alone. Now, people paid for pretty expensive tickets to watch this keynote at the Kajabi Hero Live event. But as an exclusive deal for Deep Dive listeners, Kajabi have very kindly offered to provide the recording of that keynote completely for free to anyone who listens to this podcast.
So if you're interested in getting completely free access to that keynote, just head over to kajabi.com forward slash Ali. That's kajabi.com forward slash A-L-I. And that'll be linked in the show notes and the video description as well. You just enter your email address and then you will get the recording of that keynote completely for free, whether or not you ever become a Kajabi customer. Okay, so...
This is interesting. I've never before heard the history of how this stuff developed, and it's super helpful, because I feel like stuff is sort of slotting into place in my mind. We haven't even got to GPT yet. Yeah, I think that'll come in a little bit. So, at the time, the sequential decision tree is intuitive.
It's like, okay, I can see that, like in medicine, for example, I can see that, you know, if you ask this question, you'll get one of four responses. Based on one of those four responses, you can ask this question. And eventually you can imagine like a very big decision tree. And I can intuitively imagine coding that into a computer or whatever, and it can make a decision. Then we get to this thing where it's like, feed this machine a million images of a cat, and the machine will figure out how to classify cat. That
It's completely unintuitive to me. I cannot even begin to imagine how you could solve that.
I can give you a conceptual, simple way of understanding it. One of the earliest forms of machine learning, between basically 1990 and 2000, was handwriting recognition. And so by the year 2000, there wasn't a post office in the Western world where people were reading envelopes. A machine was simply scanning them through and then knowing where and how to sort them and deliver them. So let's say...
If I write letters, just the alphabet, A to Z, and we want to identify them, we put them on a grid. Yeah.
Like graph paper. And then we wanted to write all the rules of an N being a vertical line, then a diagonal line, then another vertical line, but then there's the lowercase and that little upside down U, the little stem and all that. We could define all the ways in which on our grid that we have what an N looks like, uppercase and lowercase.
But if your handwriting is different than my handwriting, we've got a problem, right? So we'd have to then do it again and do it again, rewrite the rules. But if we take...
everyone in this postal code of London, right? Every, all 25,000 people and have them write A to Z and we throw it into a machine and we overlay that. Nice. Right? Now that we've overlaid it on our grid, we're going to have these big, massive, blurry lines, right? If you imagine, because everyone's going to be a little bit different. No one's going to be exact. It's not going to be the...
perfect or archetypal N. But a machine learning algorithm will see that that's an N and not a W. It's an N and not an R. Sometimes an R does look like an N, depending on the lowercase R and the lowercase N, but it's going to make an inference, and it's probably going to do really well. If you have a million of these, suddenly it's going to improve the accuracy. And although sometimes it'll get things wrong, like the R and the N, it's going to have taught itself how to decipher an R, and all of the letters. Yeah.
So that same conceptual idea, if you will, of layering everything on top of each other and then coming up with the traits that best predict one thing versus another, making it just a problem of probability, a big probability table: that's a reframing. It changes the actual approach by which we do it. It's also incredibly humbling, because it presumes that the human being cannot explicitly describe the phenomenon it's trying to accurately predict. There's a saying in the social sciences about latent knowledge: we know more than we can tell.
We know more than we can tell, which is to say that we can't always express something, but we still have it latent, latent knowledge in our minds. This is a way by which, in fact, computers can tap into a form of latent information, latent knowledge to come up with answers. And why I want to stress that, and I'll end here, is that
That humbling is pretty incredible, because you might say, well, if a computer can just decipher handwriting, all the better; it saves us some time. Great. But what if the computer is deciphering the probability that someone's going to have cancer or not? Suddenly that doctor, who was very proud of their education and their grades and their parents' great well-wishes when they got their degree, and of all their years of experience, is now a little bit knocked off their pedestal. What is their role? I still think they have a huge role to play, and their knowledge is still valuable. But you could imagine that diagnostics in medicine is going to go the same way, and it might become, not only might, it will become, more accurate than the human being, because it can decipher and identify different patterns that we wouldn't even think to consider, or that we don't actually physically see, or that we can't mentally grok.
The mathematical models that... So if we take the letter grid again (what I'm imagining in my mind, let's take a four-by-four grid), if you imagine writing a letter in each one, you'll get a value for: is the pixel on or off, basically? Is there something in the box or is it empty? So, like, an A would, you'd be like...
So first, think of the graph paper. Do you remember how when you went from high school or secondary school to college and then to grad school, the graph paper changed, right? You had these like big boxes, right? It was never, it wasn't a grid, it wasn't a quadrant four by four. It was like on a page, you would imagine doing it on a full page. You've got like, you know,
30 by 30. But then suddenly you had those 30 by 30, but then you had these like little smaller ones by 10 underneath it. Right? So now you have like 300 by 300. Let's take the three, let's go by pixel. So let's do 3 million by 3 million. Right? And let's now look at the A. Now what you're looking for is an edge. You're seeing dark, right?
versus light, and you're seeing now a curve: all that awful geometry and then trigonometry that we had to learn. Yeah. It's come back, and that's why AI engineers are getting a quarter million dollars in starting salary and we're not, right? Because they actually remember it, and they actually advanced in it. So suddenly the system (it's transforming it all into just numbers) can actually see, if you will, the angle of what becomes an A, as well as all of the permutations of how it doesn't conform to the perfect angle but is close enough to that which it now predicts to be an A. That's how it's working: it's looking for an edge, and it identifies it in relation to every other pixel
on our plane, on our surface. And so the example of having a million images of a cat, it's basically just that, just more sophisticated, because it's more complicated when you have other things going on, and it's a cat rather than black-and-white ABCs. Exactly.
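Here is a numpy sketch of that "overlay everyone's handwriting" idea: average all the grids for each letter into one blurry template, then classify a new grid by whichever template it is closest to, a nearest-centroid classifier. The 5x5 grids are toy stand-ins for real scanned letters:

```python
import numpy as np

def average_template(samples: list) -> np.ndarray:
    # Overlaying many noisy binary grids yields a blurry grey-scale archetype.
    return np.mean(samples, axis=0)

def classify(grid: np.ndarray, templates: dict) -> str:
    # Pick the letter whose blurry archetype is nearest in pixel space.
    return min(templates, key=lambda ch: np.sum((templates[ch] - grid) ** 2))

rng = np.random.default_rng(0)
ideal = {
    "I": np.array([[0, 0, 1, 0, 0]] * 5, dtype=float),
    "L": np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]], dtype=float),
}

def noisy(g: np.ndarray) -> np.ndarray:
    # Simulate a sloppy writer by randomly flipping about 10% of the pixels.
    return np.abs(g - (rng.random(g.shape) < 0.1))

# The "25,000 people in the postal code" scaled down to 100 writers per letter.
templates = {ch: average_template([noisy(g) for _ in range(100)])
             for ch, g in ideal.items()}

print(classify(noisy(ideal["L"]), templates))   # almost always 'L'
```

The million-cat-images version is the same move with far bigger grids and a model that learns which pixel patterns matter, rather than a simple average.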
Okay. So it was these three guys in the 80s, 90s, early 2000s who were like, this is the way. They held the light. They kept the pilot light burning for using statistics, which then became called not just neural networks, but deep learning. Okay.
What's a neural network? Oh, yeah. So a neural network actually describes our brains, the neurons that fire in our brains, with synapses. One way of thinking of it is: in our minds, we have these things called synapses, which are little nerve endings, and they connect to each other. And ideas are formed when, what is it, neurons that fire together, wire together. And so you would...
when you see your mother, you can recognize your mother's face in a particular region of the brain and you recognize all the different things and memories come flooding back into you and you can do that all the time. And even when you're not seeing your mother but you're thinking of your mother, with an MRI machine we can identify the same part of the brain firing. So we understand that these are these neurons where memories are stored as well.
I could talk for 10,000 hours about this and the fact is pretty soon you'd realize that we know far, far, far less about this than is there. So it's really an early young science, right? The...
There's more mystery to the mind than answers. But what we still try to do is take this as a rough model, very approximate, almost like a cartoon version, and apply it to an artificial neural network. And what we're trying to do is create what we call a neuron, but what it is is a logistic regression. It is taking a piece of data and doing a regression on it. But you would have lots of different connections. You have a lot of data, and it might be unstructured,
if you will; it's not in any particular form. And what you are doing is feeding it in, and at one layer of abstraction it is, for example, putting down a grid and looking at the contrast: is it white or dark? And then at the next layer it's looking at a little bit more detail: how dark is it? Is it very dark or a little bit dark? And suddenly, when you get to the middle of it, it's able to identify things that now look like an X-ray.
And you would be able to see that, in fact, it's able to see what looks to be a femur, but with a fissure in it. And is that just some sort of problem with the lens, or just some noise in the system, or is there a fracture in it? And at another layer, it becomes even clearer to the system that, in fact, this looks like a fracture, and it looks like other fractures it's seen before, until by the end... And we now have these...
with billions and then trillions of sort of connections there, it can then make a diagnosis based on it. This is a very high-level, conceptual picture of what an artificial neural network would do, something that kind of mimics a brain, but not really, right? Because there's more to the brain than we know, and this is sort of a plastic version of it.
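To make the "neuron as a logistic regression" idea concrete, here is a minimal sketch in Python; the weights are illustrative, not trained:

```python
import math

def neuron(inputs: list, weights: list, bias: float) -> float:
    # Weight the inputs, sum them, and squash the result into a 0-1
    # "firing strength" with the logistic (sigmoid) function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A network is just layers of these: each layer's outputs become the
# next layer's inputs, abstraction building on abstraction.
hidden = [neuron([0.9, 0.1], [2.0, -1.0], -0.5),
          neuron([0.9, 0.1], [-1.5, 1.0], 0.2)]
output = neuron(hidden, [1.0, -1.0], 0.0)

print(f"activation: {output:.3f}")
```

Training, including the backpropagation mentioned just below, is the process of nudging those weights until the final layer's outputs match the labelled examples.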
So we have this deep learning system that, when we apply a lot of data (and a lot of it is hand-tuned; there's a lot of trial and error to see if they can get the output that we want), sort of worked. There were these other little advances in the last decade, eight years or so, of AI: one called transformers; another, a technique called backpropagation, in which the system takes its answers and goes back to recalculate everything. We have all of these little techniques that, all together, led to a way that we could
collect lots of text, and by doing what Google started doing about a decade ago with autocomplete (you start typing a search query and it completes what it should be), we could start doing that first with sentences, and then paragraphs, and then entirely new text. And that was what GPT did.
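Autocomplete in miniature: a toy next-word predictor in Python that counts which word follows which in a small invented corpus. GPT's token prediction is this idea scaled up enormously, with a neural network instead of raw counts:

```python
from collections import Counter, defaultdict

text = ("the patient has a fever the patient has a cough "
        "the doctor has a diagnosis").split()

# Bigram counts: for each word, tally which word follows it.
next_word = defaultdict(Counter)
for current, following in zip(text, text[1:]):
    next_word[current][following] += 1

def complete(word: str, length: int = 4) -> str:
    out = [word]
    for _ in range(length):
        if word not in next_word:
            break
        word = next_word[word].most_common(1)[0][0]  # likeliest continuation
        out.append(word)
    return " ".join(out)

print(complete("the"))   # 'the patient has a fever'
```

Keep extending the idea (longer contexts, learned representations instead of counts, sampling instead of always taking the top word) and you arrive, conceptually, at what the speaker describes GPT doing.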
There's a lot of other innovations in it as well, but it became a complete game changer that it could be so effective, almost magical, right? When you go on to ChatGPT, and then GPT-4, you can give it a prompt and get a pretty extraordinary answer back. And that's where we are today. And that's really, in some ways, a whole...
A whole new beginning. What is a transformer? Like, what does that mean? Man, I knew you were going to ask. I am not going to try to explain transformers. I was thinking I could watch a YouTube video about it. Let me give you a... We don't use words, we use tokens, which are like the roots or suffixes and plurals of words. And then you just re... It's a process of recombination. I mean, I had been thinking about it like a year ago. But it is pretty remarkable that you can bring these tokens together and just predict ahead to such a degree that you get answers that are amazingly appropriate. Maybe the most important thing to
keep in mind when you're using GPT is what the system is doing and what it's not. GPT is not giving you the answer. Remember, going back to the history of AI, it's an inference.
GPT doesn't actually know anything, right? There's no answer to be had. It's like the 1-2-3-4-5s, or the handwriting recognition, where there's no archetypal thing; it's making an inference about what this could be.
GPT is giving you what it thinks a correct answer looks like. It's not the thing itself. It's not the answer. It's a simulacrum. It's a reflection. It's an inference.
It's like looking through two stages of a mirror. It's what it predicts the right answer resembles. That's going to be really important, because people say, well, it hallucinates, it's not accurate. The point is, accuracy was never part of its mental model. It doesn't have a mental model. It's not a human being. It doesn't have a sense of causality. It doesn't have any innate knowledge. It doesn't have any morality, so it can't even be held accountable for its problems. It's simply a coin toss where it says, I bet we're going to toss this coin in this sequence, in this order. And lo and behold, it did. So what's...
It seems like there have been a lot of advancements recently. And it seems, at least if I look on Twitter, that the pace of development in AI is accelerating. Is that fair to say? Or is that just what we've been talking about? It's incredible. And it's always, for the last 10 years, 15 years, it's been accelerating far faster than the leading people in the field thought it would.
That to me is the most remarkable thing. Things that we said in the book Big Data, or things that I gave in talks five years later, that I imagined were going to come down the pike within my lifetime, happened within two years, five years. Like, I blinked and we had Alexa and voice recognition, right? I blinked again and we had GPT. And I'm suffering from more whiplash than anyone else, because I was there, present at the creation of a modern variant of machine learning, and I'm stunned by the progress. I'd love to talk about some dangers of AI. But before we do that, let's talk about some benefits. There's a quote from your book, which is that as the technology evolves, many look to AI to remedy social ills that people have shown themselves to be unable to address. What do you mean by that? So
when we look at the world and we realize we've got all these problems, I think a lot of people are going to realize, people have already realized, that in fact the biggest problem is us. We are cognitively incapable of addressing what's in front of us, whether it's climate change, whether it's inequalities, whether it's this data duty of care, of transforming the world so that we apply information to solve our problems.
Daniel Kahneman and others (he's the cognitive psychologist who showed us our cognitive biases) have shown us all the ways in which our decision-making is improper: a short-term bias, an immediacy bias. So we'd rather, you know, have jam today than jam tomorrow. Everyone wants to go on vacation today, even while we're heating up the earth in ways that are going to make our cities explode into firestorms tomorrow.
So, maybe we need an algorithm to solve that problem instead. Maybe we want to hand off some of our decision making, whether it's in the area of public policy, not just in the area of marketing, to an AI algorithm to do better than the human project because it looks like we're going headlong into our destruction.
I disagree with that. I think we need to apply AI in lots of areas in society where we can make improvements. And in fact, as you know, I believe we have a moral duty to apply AI in those ways. I think we wouldn't be a wise society if we didn't. However, I also think that human beings have a mental flexibility that AI systems never will. And we also have a sense of higher purpose and deeper meaning and a...
a moral guidance and individual transcendence and respect for the collectivity that we can't presume an AI would have. And as a result, I believe that we need to get better at being humans and tap those deeper areas within us to see the world with the right frames, through the right mental model, one that respects the dignity of humans and the long-term viability of our species, and to improve the world by applying AI rather than handing off our cognitive functions to AI. Okay. So it's almost like, just to draw another analogy...
People should use computers in their day-to-day work because it just allows you to do way more things. And it would be a bit weird for a government agency to insist on not using computers because they want to use pen and paper to make all their decisions. It would just be a bit bizarre. Like, we've got computers. Come on, let's use them. Sounds like your view of AI is that, hey, this is sort of like computers 2.0.
We've got all this data. Let's use it to inform our decision making. Or as fundamental as electricity, or as fundamental as numeracy and the alphabet. Right. Like we wouldn't, as French mimes do, try to mime, you know, what we want to say to each other, right? We would just use language, because we have language, right? It just took us several thousand years to get it, and we've had it ever since. We can imagine that AI will someday become as foundational as these other tools or technologies: language, numeracy, mathematics, electricity, computing. And so given the pace of development in AI, it sounds like you don't think that it will, within the next few years, develop the mental flexibility that humans have.
No, I think that's true. It will not. I think AI is still going to astound us. AI will still be able to do things that human beings can't do. It already can. So we should definitely apply it and use it. But we need to be...
the controller of the AI rather than controlled by the AI. We need to embrace the AI era, and again, we need to apply AI in all the ways in which we can. But we need to make sure that we don't give up our decision-making processes, that we aren't simply the object of the machine but remain sort of subjects in this universe, that which is sacred in this universe. This is the question: is this a tool, or is this an organism?
What are we giving rise to? Have we created a souped-up hammer or a light bulb? Or have we created a new entity, a new organism that grows and develops on its own? The answer is it's a little bit of both, right? However, I don't think that we should take the Frankenstein monster... that's actually pejorative, I retract that, I take that back. It's not a Frankenstein monster at all. It could be; we could make one. But let's just say we don't want to take this organism that has some features of self-development and autonomy and treat it with the equivalence of a human being, or of a leader, of something that's greater than who we are. I think that's just really dangerous. And the reason why is that we as human beings have an intellectual flexibility that the AI does not have, and in some ways cannot have, because it's not alive, and we are alive. It's inanimate and we are animate. And by being alive, I think we participate in something greater than ourselves. That's something that we don't understand, but I think the only way to understand it is on a spiritual level rather than a purely rational level. And it is this fabric, this weave that we share with everyone alive with us, with our posterity and with our ancestors. And machines don't have that. And to think that we can have this just because we have this machine that can actually do so much more than we can on an intellectual level
doesn't mean that it should be doing everything, because we are more than our intellect. We are also our spirit, for lack of a better term. I like that the conversation has gone in this direction. I've been exploring, well, somewhat dabbling in, the whole spirituality stuff recently as well. I think I'm very much someone who lives in the mind and very much struggles to feel even the emotions in my body, for example. And so now as I'm reading, you know,
Sam Harris stuff, but also other things around
spirituality, I'm finding that there seems to be this plane of existence that is just impossible to put into words, this thing that we call the spirit or the soul or any of this kind of stuff. It sounds like you're a strong proponent of that view as well. Completely. Not because I particularly subscribe to one religion or another, but because it's right in front of me. I see it everywhere I look. William Blake famously wrote, "All that is living is sacred."
And you think, well, yeah, no, duh, of course. Like, isn't it? Like, who would make the case that it isn't? But then you look a little bit deeper and you realize, holy shit.
There's a lot of people who are so wrapped up in their rationality and their scientism and their logical thinking and their quest for rock-solid empirical evidence, that which is, you know, seeing is believing, that which I can touch is all that exists, who fail to see that which is invisible.
and I use the term only in the Saint-Exupéry way. Antoine de Saint-Exupéry was the person who wrote Le Petit Prince, and he wrote, « Ce qui est essentiel est invisible aux yeux. » That which is essential is invisible to the eyes.
And what he's talking about there, of course, is the emotional connection to things. But it could also apply to that which, again, we've got terrible language for; we'll call it the spiritual. It seems to have these sort of echoes
of religion, and it need not, right? Because the ancient Greeks had the idea of several ways of interacting with the world and thinking about the world. One was logos and the other was mythos. Logos was, obviously, logical thinking, and mythos was tapping deeper meaning: not myth in the sense of that which is false, but mythos as that which we can perceive but which lies beyond our rational senses. And between the two forms of truth, the Greeks thought that mythos was the more true and that logos was simply a tool. After all, any lawyer in a courtroom can apply logos to win his case, but does that necessarily lead us closer to truth?
That's debatable. So I think that post-enlightenment, now it's getting very grand, I accept, but post-enlightenment we have deferred most of our acceptance of thinking and our conceptions of the universe to the logos at the expense of the mythos. And of course that's what
William Blake was railing against, along with so many other people. We've bastardized the term imagination as something that is almost like fantasy, when in fact imagination is pure Copernicus and pure Pythagoras. It's about seeing that which exists within our mind's eye, not with our physical eyes. Why this matters so much for us right now: look at AI, look at its decisions.
It's all logos, no mythos. C.S. Lewis, the famous writer and philosopher and Christian theologian in some ways, identified in one of his articles men without chests. And what he meant by men without chests is that the people who
are living with their appetites in their stomach, are living with their minds and their intellect, but are missing that which is in their chest, their spirit, their idea of transcendence, their idea of selfhood, their idea of a soul. Again, terrible term because it has those connotations of religion, but it need not. One can be secular, but see a form of
I don't want to say divinity within ourselves, but a special celestial fire, something special that exists, that I think in our quietest moments we see as true. And in fact, in those special moments, we recognize that the reality is those moments of quiet, solitude, and peace.
And that what is actually artificial is the day-to-day leading of our lives, buses passing and plumes of smoke hitting our nostrils. Yeah.
We've got a bunch of questions from our podcast community around risks that AI has. So firstly...
To what extent do you think AI could spell the end of the world? To what extent is there an existential risk or existential threat from AI? It's non-zero, but the more that we talk about it, the less likely it becomes. The idea that we're going to hand over our launch missile codes to an AI and then go to the beach only to find out that there's a mushroom cloud on the horizon is pretty remote.
On the other hand, in very miniature ways, we need to be vigilant. If we have a trading system at a bank, we wouldn't want to hand it all off to an AI without a human somehow in the loop, particularly as a control switch, so that if things start running amok, someone can look at it, realize things are indeed amok, and slow it down and control it. Likewise, in the area of military defense, it would be completely heinous for us to have autonomous lethal systems, except possibly in defense, and in ways that are very circumscribed. But I think you would still want human beings somehow in the loop. Reaction time becomes a problem, because combat will be fought at a quicker pace. But I still think both sides probably have an interest in slowing it down.
Therefore, we need AI arms control to make sure it does get slowed down, in the same way that every country in the world could use gas weapons against any other and no one does, because there are conventions to prevent it and because they realize just how heinous that would be. I'm a Kissingerian in that sense. Henry Kissinger wrote a book about this with some co-authors.
I subscribe to that, which is to say that we need to research these weapons because we can't give up this ability to an adversary. However, we need to immediately engage in AI arms control so that we have some form of a playing field in which we have some degree of confidence about how people will respond to circumstances, and about what we think we shouldn't pursue for the sake of our common humanity. Yeah.
And I guess there are reasons to be optimistic here, like your point about gas weapons: anyone could use them, but they don't, or rather most people don't, because it's crossing some sort of invisible line. So we can be a little bit grateful for that. But we can also be a little bit more terrified still. And the reason why is that the wise men and women, mostly men, who signed the Geneva Conventions were ones who could look to their left and right and see the kids they went to secondary school with blinded by chemical weapons, or not even there, dead. And they knew their grieving mothers. They understood something about the world that we, in our cosseted, rather pathetic TikTokian lives, have forgotten or never knew. And particularly for younger people, you can look to elders who just no longer care, who no longer see it, who were never part of it; it was never part of their narrative.
And if you don't actually have it in your DNA, in your epigenetics, if you don't feel it, if it's not coursing through your bloodstream, the absolute abject horror of war, or intolerance, or the absence of freedom, or the denigration of human dignity...
If you don't have that in your fucking body, you don't know it. And if you don't know it, you're not going to make decisions based on it. And I see it every day. I can't look at contemporary American politics without wondering how the hell that happened. And the reason why is that if you come from a different narrative, one of fleeing oppression, one of incredible fright of annihilation,
you see the world differently and you're a little bit more vigilant and you're a little bit more ready to find mechanisms, what the definition of law is, the wise constraints that set men free to find the way of having the rule of law to ensure human dignity, everyone's dignity.
Why is this? So you mentioned the TikTokian stuff. What is it about modern society that makes us, in your view,
less good at this than, for example, our grandparents were when they signed the Geneva Convention? Yeah, great-grandparents, first of all. So I don't want to sound like an old fogey; that would be rather pathetic, and also probably not who I am. But first, I think there's definitely been a lack of sacrifice. Going back about 50 years, post-Vietnam was the last time that a generation had to really confront
being a pawn or an object in this world and not a subject in this world, having someone else push one in an environment where they didn't want to go to serve someone else's aims.
So that's been a long time. So we're in a world in which there's been an infantilization of a lot of our lives. Secondly, we've got these creature comforts that we never had before. We're no longer dying of a blister or of a scratch from a rose thorn, which was actually very commonplace only 100 years ago, literally 100 years ago.
In the book Framers, which I wrote after Big Data, we begin by talking about how the son of the president of the United States got a blister playing tennis on the White House lawns and perished, you know, died of sepsis within a week. His wealth and status couldn't save him. No penicillin.
And then thirdly, our attention span has definitely decreased. And as a result, we are no longer holding on to substantial ideas. Again, this isn't old fogeyism. You know, there's a lot of social science; Jonathan Haidt, among others, has pointed out these pathologies.
And a lot of the problems have emerged actually since the advent of social media. Every kid knows this. We're all a basket case of depression and anxiety because we're so well connected in a way that we hadn't been in the past. So people are talking about a digital detox. These are new tools and we need new social practices around them.
Just as Asia between 1950 and 2000 went from famine and undernourishment to obesity being the healthcare crisis, we have the same problem with our digital tools after 25 years, give or take, of the internet and the web: how do we best bring this into our society? There's the famous cliché of an American diplomat, in the 1970s or 1980s, asking the Chinese leader, what do you think of the French Revolution? And the Chinese leader replying, it's too soon to tell. 200 years later, and it's too soon to tell. It turns out it's apocryphal, a mistranslation. But the joke at the time was that the Chinese thought in such a long-term mindset that something that happened only 200 years ago was too soon to tell. There's also a nice aphorism that for British institutions, the first 500 years is always the toughest, right? That's the long-term thinking that we need to have.
The tools that we're using, like social media, are only 10 years old, 25 years old, and we need to come up with the right practices. In general, I think that we are a very cosseted generation, one in which life and politics and social relations resemble a video game more than something real. And as a result, you can have flame wars without thinking of what the consequences are. You can troll without realizing these are actual human beings. And again, we've lost the sense of tolerance, equality, and human dignity. Human dignity as in... Well...
Yeah, I stress that because I've been thinking about the French Revolution's great aphorism of liberty, equality, and fraternity. And I've been thinking about the concepts of equality and human freedom. Both of them are... I've not written about this yet; I'm still sort of working it out in my mind.
Freedom and liberty aren't quite right, because we're not free. And we're not equal, in the sense that we're all born very differently, right? We don't have equality. So people want to use the term equity. What does that mean?
And then freedom: sometimes we have too much freedom, right? In fact, the whole point of civilization is that we curtail our freedom; we can't live together unless we curtail our freedom. John Stuart Mill famously said that my freedom to extend my arm ends at the beginning of your nose, right? Yeah, exactly, the no-harm principle. So I wonder if maybe we need to introduce a new concept, right?
It's an old concept too: dignity, an inherent dignity with which we treat each other, where my right to be treated with dignity is predicated on my treating you with dignity, and vice versa. And that means that people like Steve Bannon and Donald Trump can't do X and Y to me, right? Because by dint of being a human being, I'll first say, I'm owed a form of dignity. Then if we extend it to any living being owed a form of dignity, maybe we don't then have these factory chickens and things. And then eventually, do we have to get rid of farms because we can't have tomatoes? I think that'd be going too far. But I am captivated by the idea that if,
150 years ago, there was slavery, and intellectual justifications of slavery, and a rule of law that enshrined slavery, and today that seems batshit crazy, what is it that in 100 years people are going to look back at in my generation, at the things that I do, and say: how the hell could you have done that? What were you thinking? So one example is, why would I buy a plastic water bottle? Would I really buy water in plastic?
Maybe that's actually heinous, and I'm not realizing it. But I think there's a lot of other things. And the area of dignity, I think, for me, might be that sort of ground truth concept that if we accept that we all have, that we then can rebuild our society in a way, in a form that's just and honorable to everyone. Mm-hmm.
It's quite an old-fashioned concept, or a seemingly old-fashioned concept, the idea of dignity. I remember a couple of years ago I read, I think it was, The Righteous Mind by Jonathan Haidt, and I think in that book he poses the scenario of, you know, is it morally wrong to have sex with a dead chicken?
It's very hard to make a case for that in the liberal morality worldview, where it's just about, you know, harm and consent and stuff. The thing's dead, you're going to eat it anyway; I guess, what's the harm? But the way you do explain that as being perhaps morally wrong is through the idea of sanctity and dignity, words that seem so old-fashioned that it seems unfashionable to use them as a way of saying, oh, Donald Trump's behavior was undignified and therefore it was wrong, kind of thing? I wouldn't go that far. The 'undignified' thing I don't have much time for. It was undignified, but so what? I don't regard dignity like that. It's more of a deep respect for the human being and for
the sacredness that they have. The sanctity of a human life and of the world, of living things. Yeah, I'll give you another example of this. If you look at all the AI papers that have come out in the last few years, they typically have between eight and about 30 co-authors. But I love looking at the names of the co-authors, because you have this hodgepodge of the entire world. In the most recent paper on consciousness that came out (we're recording in late August 2023, and there's a big paper on AI and consciousness), there's one guy with a name like Peter West. And I like that, because it's a very Christian name, right? A Western, you know, Anglo-Saxon name. But there are all these other names: some you can say, oh, that's Chinese, or that one's Indian or subcontinental, but some you just really have no idea where they come from. You think, whoa, what's that about? With just Qs and lots of consonants. And so what I love about this is that it seems to me an absolute affront to racists. How can you have any sort of foundation for racism, as in: a person has a different skin color, literally their gradation of melanin differs, and therefore you're going to treat them with less respect, or you're going to think less of them, or they're ipso facto less intelligent? From an intellectual standpoint, it's ridiculous. Racism is a little bit like slavery. It's a little bit like all these other weird things that we no longer do but were done thousands of years ago.
I can't think of anything worse than owning another human being. It seems to me so fantastically peculiar that we would have people who would still have as a mental model something that just seems so ridiculous. It's nice to see in an AI paper that the racists are on the lower end of achievement in the world. Maybe there's causality there, while you've got this global hodgepodge of individuals
who are at the most exceptional level transforming the world in terms of this concept of progress and fulfillment and betterment. So it's a lovely thing. So the dignity that I'm referring to is this dignity that adheres to every individual, qua individual, by dint of being a person in this world. I mean, it's such a cliche, but it's worth repeating that. Like you have 8 billion people in the world, each one is different and individual.
The way I think about it is that it's like traffic. You're not in traffic. You are traffic. And likewise, we're not in the world.
We are the world, all of us, all of us together. We're all interacting with each other almost like atoms or electrons bouncing across each other, creating the world that we have. Hence the second book, Framers, is all about mental models and the power of mental models and the limits of AI because we're all in this together and we need to respect each other's mental models and ways of thinking in order to solve our problems. Nice.
We had a bunch of questions from our Telegram community, people worried or asking about to what extent AI runs the risk of "taking people's jobs." I wonder, what's your take on it? Absolutely, it's going to take our jobs. Yeah, totally. Not everyone's job. I think there's always going to be work to do; whether that's well-paid work, whether that's satisfying work, it's up to us to create. But again, we're agents. We have agency. We're subjects in the world, not objects in the world. If we think that we're simply going to be victims, the pinball in the pinball machine of AI, just bouncing around off things, then that's what's going to happen to us. But if not, we can recognize that we have agency. We can direct this in the ways that we want to direct it and enshrine human values, in this case dignity, in the ways we think we need to. Work is definitely going to change. Everyone's work is going to change. But that happened with electricity, that happened with computers, that happened with,
Obviously, with the slide rule. We need to reimagine, and I think it's going to be tapping our imagination, reimagine what will bring value to the world and then do that. Subjects rather than objects. I guess the classic example is, oh, but what about the truck drivers? The truck drivers' jobs are going to be replaced with autonomous vehicles.
Trucks and stuff like that. Over time, sometimes. But I think you might still want someone in the cab. I mean, if the truck driver can sleep during those long stretches across Nevada and the Dakotas, all power to them, right? They can be in the cab sleeping, playing a video game, interacting with their children and wife
by Zoom. Sure, that sounds great. But then we might need them for edge cases for the last mile. People are saying that's a likely place we'll need them. We might want them to
simply have the transactions when they arrive where they're going, so there can be a human-to-human legal receipt of goods. Eventually that might go away and it will all be automated, but it might not in the relevant timeframe. But there are very few people in the world who, when they were five years old, aspired and said: I want to be a truck driver, alone in my cab, going through the Dakotas on Adderall and Coke and coffee, trying to stay awake and not crash my cab, getting somewhere with a dispatcher breathing down my neck saying I'm running late, and watching the fuel. Right? Like nobody says that's the life I wanted. Yeah.
So why vaunt the truck driver? I don't think the jobs are going to go away immediately in the short term, and I think there will be a transition. That has always been the case. Sometimes those transitions, in the history of technology, have been very violent and very difficult: the looms and the Luddites, Ned Ludd. But
I think that we've learned from that that we're going to try to shape this technology around human needs and human ambitions. And even the five-day work week is an artifact of modernity because 150 years ago it was a six-day work week.
The five-day workweek was a reform of industrialization so that we could have two days of leisure, not just one. The trend these days is to actually consider the idea of a four-day workweek, which I think is the direction things are headed in. We've got a nine-day fortnight in our team. Every other Friday is a day off, which people seem to like. Yeah, exactly. A final thing I wanted to ask you about. You talk a lot in the book around the human advantage over AI.
At a top level, and people can read the book to find out more, what is the human advantage over AI? What do you feel are the things that humans can uniquely do that AI can't or won't be able to do, at least for a while? There are three things in particular. The first is causality, the second is counterfactuals, and the third is constraints. Causality is this idea of cause and effect, this inherent and innate ability to understand cause and effect.
Machines can do that based on data, but humans can do it with more than just data. We have a sort of instinctual ability to see it. And more importantly, we're primed to see cause and effect. In fact, we're so good at it that we see cause and effect where it doesn't really exist. We're constantly mistaking causality and getting it wrong, but it makes the world a predictable and repeatable place. Machines have to be trained in causality; it doesn't come naturally to them. The second is counterfactuals.
We fill in the blanks. We see what isn't there. Literally, you have never had the experience of a hot furnace pouring molten metal on top of you. But you don't need to have had it. When the molten metal comes pouring, gushing out of the steel mill, you run for your life, because you know what's going to happen. What you've done is play the game of life two ticks ahead, filling in information that you don't have based on information that you do. That is counterfactual reasoning. It's a magical, orchestrational form of cognition that machines don't have, and other animals don't have, but we do. And then thirdly, we've got constraints. It's not enough to simply live in a world in which anything is possible, particularly in our imagination. We can render anything fantastically in our minds, but in fact we constrain it, in certain wise ways, to the most appropriate mechanism to get done what we need at a given time, so that we can act in time.
Causality, counterfactuals, constraints are the three sort of building blocks of our mental models, of our frames, of the way in which we see the world, one way versus another. And if we apply our mental models, which machines can't do, and we adjust them and we reframe situations based on the here and now and the requirements of a given time that might be different than an earlier time, we can actually solve our problems. Okay, nice.
That's good stuff. One sort of unrelated thing I wanted to ask you about: what's it like working at The Economist? I've got the app, I read it occasionally. I sort of always aspire to read more of it because I'm like, oh, I want to be more informed about the world. And then life gets in the way and it's like, ah, screw being informed about the world. But when I saw that you were a deputy editor, is it? Yeah. What does that mean? And what is the organization like? Yeah.
There are a lot of questions there; that's a whole other podcast. So The Economist is a lovely place. I had worked at a place called the International Herald Tribune in the 90s. I then worked at the Wall Street Journal around 2000 to 2002, in Hong Kong.
And I really felt, after 9/11, that I just couldn't be a journalist anymore. I didn't like journalism. I thought the media was cheap, simple, simplistic. It was always a simulacrum of reality, not the real thing. Everything was fake news, and there's an element of truth to that, because of course you're simply representing what the issue is. You can't tell the whole story, because of deadlines, because of size constraints, and because you don't know all of it at any given time. It's just hard to do.
And so I had a fellowship at an American university and really was sort of defining myself as not being a journalist anymore. And I realized if I was to work anywhere, I'd work at the one place that I didn't think was journalism but something different, something apart, and that was The Economist. Which is to say it had a mission that was different. It was thoughtful.
It was patient. It was very balanced. It was very partial as well. It had its values and it really pushed forward for its values. So it didn't have this false impartiality that I saw particularly in the American media that was trying to be so even-handed that it often didn't get to the actual truth of the subject.
And so, luckily, after I wrote a few pieces for them, they brought me onto the staff and I started writing for them. I then became an editor, and now, as the deputy executive editor, I look at all the ways in which we're applying our brand to our different business activities, to see that it adheres to our values and that we raise our game in all the ways a modern media company goes to market. That's not just our journalism but, as importantly, our events; we have Economist Education, in which a great education team offers courses to people; and we have a research arm that does extraordinary work. And it all needs to somehow be unified.
An organization needs one person to be both the lion tamer, cracking the whip, and the great champion for these other business units. That's my job. How big is the organization? How many people? At the newspaper, there are now, I'd say, about 250 people in editorial, depending on how you count it:
as reporters and editors, as well as the social media team, as well as the film team and the podcasting team, as well as the production people, as well as the graphics department, the data journalism, the photo department, etc., and the illustrators. So that's at the widest level. But then we also have a commercial side that will do advertising and other things.
But at the Economist Group, we've got about 1,600 people, because we've got an events team, we've got the educational team, and we've got the research arm, which does things like healthcare data, among other things. And so over the 20 years I've been there, we've grown from a weekly newspaper (a magazine calling itself a newspaper, really) into a diversified media conglomerate.
And that's been glorious. It's hard to do, and not many people have been able to maintain it. It's also hard to maintain because the world's a complicated place and you need to make sure that you're interacting with it in the right way. Have you seen in the last 20 years that, you know, the sort of
top line thing often people say is that, you know, journalism is dying because people are reading things less and the TikTokification of the world means that no one wants to read a long form magazine anymore. Have you seen that or is it like, how are you guys thinking about that? So we're in a world in which multiple things can be true at the same time, right? Remember even in the dark ages,
You had monasteries, you had monks and scribes, right? In fact, you had a flourishing of the sciences in the 1500s and 1600s, the very time when you also had barbarians lopping off people's limbs for fun. So I think we're reverting to a similar era in which, sadly, the world is bifurcating: into those who are going to lead their lives without much deeper care for how the world is unfolding, other than to complain about it and be a victim of it; while at the same time there's going to be this layer of society who understand that they are stewards and they have agency, that is how I should say it, that they are blessed with the resources, both monetary and cognitive, and with the responsibilities
in their professional lives, that they can change the world, and that this is their moment to do so. And they need to be armed both with information and with a community to make those changes. And that's what we're there for.
I don't think we as an institution have thought about it that way, because we're not very introspective. We just get on with our work and do the best we can. But I think there are several people who step back and look at it at a deeper level.
I'm one of those people thinking about it, realizing that even as the economic model of media is collapsing, even as there's a societal shift towards shorter attention spans and reading less and thinking less about ideas, there is going to be what Arnold Toynbee, the British historian, called the creative minority.
He described it as those people in every era who step forward to save civilization, if it is to be saved at all. And we are in a similar situation right now. Again, ever thus; we've always been in that situation, but we're pressed with it again now. The stakes are life or death, and we need that community to think deep and hard about the issues we're facing, the problems we have, and new solutions to those problems. Climate change being one of them, inequality a second, the rise of AI and how we control it a third, and chemical weapons, tensions in the Indo-Pacific, you name it. There's no shortage of problems, and we need
all hands on the tiller. And maybe most importantly, there's the environment that we're in: the rise of national populism and the lack of tolerance and of respect for human dignity. Bring it all together, and we need people of good values to think through our problems and apply rational thought to solve them. That's what we're there for. Nice. Okay, this is also a somewhat random question, but you, I think, are the best person to answer this.
I would like to become more well-read about the world and stuff. One thing that has really struck me in our conversation is that you're pulling out references from all of history, and I'm just like, whoa.
It's just incredible, because I'm from a science background and now I do this productivity stuff. So I'm very familiar with science-ish stuff and medicine and the world of self-help. But when it comes to anything resembling the social sciences or history or geography, even understanding where certain countries are, I'm like, is that country in South America or is it in the Middle East somewhere? I feel so ignorant when it comes to things outside of my areas of expertise.
And I've always had this thing, since medical school, where I was like, oh, I really want to read The Economist; I want to be able to read The Economist and understand what's going on in the world. And I'll read the 'week in brief' or whatever it is for a few weeks in a row, and I'll just become overwhelmed because there's too much shit going on, and I'm having to Google every other word to be like, where's that country? Who is that person? And then you guys released the Espresso app a few years ago, and I was like, oh great. And even then, there's a lot there that seems to require
a lot of background to really understand it. So I guess my question for you is, as a noob to the world of social sciences and current affairs and politics and history and things like that, what would be a good starting point? A starter's guide to understanding the world. I'm so glad you asked that, because I think there is a way. The first and most important thing is that no one should despair, feel depressed, think that because they don't feel they have what it takes to think through a lot of these issues it's futile, and get paralyzed by their unease. Everyone has to start from somewhere, and it's never too early or too late, you know, to do so. The second thing is that these baby steps add up pretty quickly, and eventually you're in a different place. That was my own narrative, academically and in thinking about the world. And so I can speak with the authority of both failure and success to say that, look, if even a clown like me can make good, anyone can. Anyone can. So, yeah,
the second reason why I'm so excited by the question is that I think there is a simple answer. At The Economist, we have a daily podcast called The Intelligence.
It's not 'the intelligence' as in we're snotty elitists; it's 'the intelligence' as in an intelligence briefing, like the intelligence services. It's sort of 30 minutes of three different little stories, one larger one and then two smaller subsidiary ones, that give you a general panoply of what's going on in the world in a very accessible way, because we're interviewing our own correspondents about the journalism they're producing. It gives you, in miniature, sort of all you need to know to understand the world. And because it's a daily show, you're almost getting an audio version of the best that's in the magazine. So if you were to do just one thing, it would be to listen to The Intelligence every day. I think for the moment it is free; I think it's going behind a paywall at a very inexpensive price, by the by, starting in the autumn.
But still, I think it'd be worth it, because the whole idea with our podcasting strategy was in fact to take a lot of the material that we were giving away for free and create a scaled-back version of The Economist for those people whose lifestyle it fits. There are lots of people who want the weekend read, and you can listen to every single story that we publish, because we have professional broadcasters reading it out on our app. I know lots of people who, quote unquote, read the entire Economist just by exercising: three days, from Thursday to Sunday night, on the treadmill, they're listening to the whole weekly edition. They have the paper version and never pick it up; the kids get it. However, The Intelligence as a podcast is probably the right thing for people to listen to. After one month, if you're still listening to it, great, you're doing great. If you're not, we're not right for you. That's fine too.
Nice. All right. I mean, that sounds like a very reasonable thing I can integrate into my life. So I'll try it out and I'll email you and let you know how it goes. Please do. Good. Good stuff. Ken, thank you so much. We've been recording for almost two hours. Any final pieces of advice or words of wisdom for anyone who's gotten to the end of our conversation? Oh my goodness, absolutely not. I mean, people who have final pieces of advice and words of wisdom are always concocting something on the fly, and it's usually pretty meaningless. We're all
scrambling in this world, trying to make sense of it. Nobody has the answers. And so maybe that's the thing to listen to: the final words, a reminder to myself that might be useful to others, are to know that I don't know what's going on, but I still have to make the best of it. Nice. Great place to end. Thank you so much. Thank you.
All right, so that's it for this week's episode of Deep Dive. Thank you so much for watching or listening. All the links and resources that we mentioned in the podcast are linked down in the video description or in the show notes, depending on where you're watching or listening to this. If you're listening on a podcast platform, then do please leave us a review on the iTunes store; it really helps other people discover the podcast. Or if you're watching this in full HD or 4K on YouTube, you can leave a comment down below and ask any questions or share any insights or thoughts about the episode. That would be awesome. And if you enjoyed this episode, you might like to check out this episode here as well, which links in with some of the stuff we talked about. So thanks for watching.