Hey, TED Talks Daily listeners. I'm Elise Hu. And today we have an episode of another podcast from the TED Audio Collective, handpicked by us for you.
The UN recently released a report on global AI governance. On The TED AI Show, creative technologist Bilawal Sidhu sits down with political scientist Ian Bremmer to discuss the findings. You might remember my conversations with Ian on TED Talks Daily before, but in this special episode, Ian and Bilawal help us understand why this latest news matters and what we can do to make sure AI is on the path to doing good without unleashing its dark side.
Stick around for the conversation and find more of the TED AI show wherever you get your podcasts. It's all coming up after a break.
Support for the show comes from LinkedIn. Where are my B2B marketers out there? If you are a B2B marketer, you know how noisy the ad space can be, which means if your message isn't targeted to the specific audience you're looking for, it can easily disappear into the noise. But with LinkedIn ads, you can precisely reach the pros who are more likely to find your ad relevant. LinkedIn has targeting capabilities that let you reach the folks you're looking for by job title, industry, company, and more. So you can stand out with LinkedIn ads and start converting your B2B audience into high-quality leads today. LinkedIn ads will let you build the right relationships, drive results, and reach your customers in an environment where a lot of them are hanging out, because you will have direct access to a billion members, 130 million decision makers, and 10 million C-level executives. You can start converting your B2B audience into high-quality leads today. We will even give you a $100 credit on your next campaign. Go to linkedin.com slash TED audio to claim your credit. That's linkedin.com slash TED audio. Terms and conditions apply. LinkedIn, the place to be, to be.
Canva presents a work love story like no other. Meet Productivity. She's all business. The Canva doc is done. Creativity is more of a free thinker. Whiteboard brainstorm. They're worlds apart, but sometimes opposites attract. Thanks to Canva.
The data is in the deck. And now it's an animated graph. Canva, where productivity meets creativity. Now showing on computer screens everywhere. Love your work at Canva.com.
Support comes from Zuckerman Spaeder. Through nearly five decades of taking on high-stakes legal matters, Zuckerman Spaeder is recognized nationally as a premier litigation and investigations firm. Their lawyers routinely represent individuals, organizations, and law firms in business disputes, in government and internal investigations, and at trial, when the lawyer you choose matters most. Online at Zuckerman.com.
Usually, the TED AI Show comes out every Tuesday, but we're dropping this episode a little off schedule because something of global significance has happened, and we wanted to be one of the first to share it with you. So without further ado, let's get right into it. Imagine a world controlled not by nations, but by a handful of tech companies, driven primarily by profits, holding the reins of artificial intelligence. A technopolar world, as geopolitical expert Ian Bremmer calls it.
where algorithms shape our lives, our economies, and even our wars. Now, this isn't some sci-fi movie. It's the future we're barreling towards right now. And the question is, who gets to write the rules? Well, now the UN has stepped into the ring with a bold plan: global AI governance. They released this plan on September 19th, 2024, and we're giving you the scoop right here.
I had a chance to read an advance copy of the report, and it gave me a lot to think about. So, this plan is a set of guardrails the UN wants to put in place for all sovereign nations to adhere to, just like what we do around nuclear power or aviation.
Only this time, the stakes are even trickier. Because it's AI, right? It's a domain tightly controlled by tech giants. It's an ever-evolving technology that even the most seasoned researchers are still trying to understand. It's a tool that has the potential to both solve humanity's biggest challenges and unleash worldwide chaos. In other words, do global rules of conduct for such a powerful, young technology, ruled by tech alliances, even apply? And is the UN, an almost 80-year-old bureaucracy consisting of sovereign nations, the right institution for the job? I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything. Today, we're joined by Ian Bremmer,
an American political scientist and one of the co-authors of this report. He's not just a policy wonk, but someone who's in the room with world leaders, feeling the pulse of this tech revolution. He's explored the very question of governance in the digital age and its shift in power from sovereignties to big tech. And he believes the UN is the only international body in a position to hammer out a credible global consensus for this world-changing technology.
And hey, y'all are going to want to buckle up for this one, because this is a ride into the future we're all building, whether we're ready or not. So, hey, Ian, welcome to the show. Thanks, Bilawal. Looking forward to it. Cool. So this report comes at a pivotal time. AI is obviously no longer this futuristic concept that we hear about in movies. It is certainly in the zeitgeist. But I have to ask, what prompted the UN to take this step towards global AI governance at this specific moment in time?
I've been talking with the Secretary-General about this issue for more than five years now. When he became Secretary-General, Antonio Guterres recognized that there were two global issues where governance was wholly inadequate, that he needed to make top priorities of his time. One being climate, where governance is happening, it's just happening way too slowly, and with far too limited resources for the majority of people on the planet. And second, disruptive technologies, notably artificial intelligence, which was coming way too fast, and where there was a complete absence of governance. The reason it took so long for this to happen was in part because the UN is a bureaucratic creature and it takes a while to get things set up, and secondly, because they didn't have a senior envoy for geotechnology in place.
But having said that, this has always been a top priority for the Secretary-General. And the fact that you then had billions of dollars being poured into this technology, with not just enormous hype but real excitement around real use cases, changing the way humans interact with each other, changing the way we innovate, changing the way society and the global economy works, meant that an absence of global governance was going to leave a lot on the table and was also likely to get us in real trouble. So the urgency has clearly stepped up in the past, I'm going to say, 18 months.
And in that regard, I think that this report is landing at a really critical time. Yeah.
One might even say it's coming at just the right time, because you're totally right. Let's call it the ChatGPT moment, or the large language model moment. ML has certainly been on a steady trajectory, but the fact that the average person in the world can now actually experience this technology is a more recent phenomenon, a shift from machines that can just understand to machines that can create.
And so along with that sort of this bucket of technologies that requires governance, you know, the report points out that a shared understanding of the opportunities and risks is key. If you had to distill down for our audience just a couple of the key points in terms of what the opportunities and risks are when it comes to this technology, what would they be? Well, the opportunities...
You mentioned that the average person can now see, can now use artificial intelligence in a way that affects their daily lives. They will engage with chatbots. They will use creative AI tools. They'll create art with it. They'll create videos with it. They'll edit with it. They will brainstorm with it. Anyone that has access to a smartphone or a computer now has access to AI in at least a limited way.
that is transformative as a tool. It's a helpmate and it's about to become vastly more than that. In some cases, it's a friend, it's a counselor, it can be an advisor. Those are the opportunities for individual human beings. But then you have the industrial, the corporate, the enterprise use cases. You get
innovation, you get new vaccines, you get new batteries, you get more efficient ways of, you know, stacking traffic, you get all of that kind of thing. And I mean, I've spoken with so many CEOs in global pharma, energy, retail, transport, who are talking to me today, this week, this month, about the billions of dollars that they are already saving from deploying AI in their industrial use cases. So on the opportunities: in my view, this is not a technology that is being overly hyped. In my view, these are technologies that are still radically underappreciated.
Because most people don't yet know or see all the ways that they are about to be deployed and are already being deployed. So that's the first point. The second point is on the risk side is that all of the companies, and they're mostly companies, mostly not governments, that are driving the investment in AI are capitalists when it comes to their profits.
but they are socialists when it comes to any losses or costs that come from their products.
And I don't know about you, but I would like them to be capitalists for both profits and losses. That's how we got ourselves in trouble on climate change: we had large numbers of industry, and governments aligned with that industry, that were very happy to be capitalists in making lots of money from exploiting the natural wealth of the planet, but were very happy to push the losses off to be everyone's problem, which means the global South, your children, the poor. And that's not capitalism. That is rapacious oligarchy. And so what we need is to ensure that the negative externalities that come from deploying AI are addressed: any jobs that are lost, skill sets that need to be redeployed, disinformation, and the use of AI by malevolent actors or tinkerers in ways that are dangerous for our economy or national security.
And also to make sure that the opportunities generated by AI are not simply taken advantage of by small numbers of comparatively wealthy people who have the market serve them most effectively because they have the deep pockets. And those risks will grow if they are not addressed. And they will not be addressed if the marketplace is allowed to capture the governance model. Right?
Remember, a functioning free market economy is one where corporations compete against each other in a level playing field with governments and regulators that act as independent and fair arbiters.
And that is not the way the American economic system runs. And that's not the way the global economic system runs. And that's particularly dangerous when you're deploying new technologies where there's an absence, a complete absence, of regulation. That, I think, is the context in which what the United Nations is presently trying to do is operating.
You know, I think you very nicely outlined why this is sort of a double-edged sword, right? You're making a brilliant point, which is that we're hearing about these large language models that are largely trained off of publicly available data, but all the latent value that is sitting in all these enterprises and even governments across the world, I mean, we haven't yet begun to see the economic impact that's going to have. At the same time, yes, this technology could absolutely obliterate the world a bit, so to speak. And the climate analogy is a very interesting one, too.
I want to do two things. I want to talk about the players here, and then I want to talk about incentives. You've stated in the past that we're headed towards this world that you call a technopolar world, right, where these large tech companies are effectively sovereign in this digital world that we're talking about. You know, yet it's important for incumbent sovereign powers to govern both of these digital and physical worlds that we spend so much time in and that are getting more and more interconnected. Could you break down this shift in sort of power dynamics that is taking place
and how AI plays into all of it. Sure. I mean, when you think about artificial intelligence, first of all, the bots and the models that are being created, that we're all using, are being developed largely, overwhelmingly, by private sector corporations who have business models and respond to market forces. And they are primarily incented by profitability and by the need to outcompete other companies in this space.
That will determine what they create, unless and until they are incented in other, non-market ways by governments and by other actors. And overwhelmingly, in the first couple of years of the explosion of AI, the governments are playing catch-up. There's no global governance model. It's overwhelmingly just a small number of advanced industrial economies. And further, a lot of it is catching up to the state of where AI is today, where you have no governance at all, as opposed to thinking ahead to where AI is likely to be in three or five years, which is a dramatically different and more capable place. And the fact that the models, the data, the priorities, the incentives, the actions are being determined by the companies as opposed to the governments creates an environment which is technopolar. In other words, when you talk about the AI environment, at least today in 2024, you're really not talking about an environment where governments are sovereign. Now, ultimately, of course, governments
have the ability to enforce laws. And these companies don't exist in the virtual world. They exist in localities. They have bank accounts, and they're not, you know, just all on crypto off the grid. And they live in houses and they have citizenship. And so they absolutely can be governed. But the de facto reality is that the decisions that are being made about what AI is and what is done with it are being made overwhelmingly by the technologists, by a small number of the technologists. So they're the ones that are deciding. And that means a couple of things. It means, first of all, if you are a government actor, or a multilateral organization comprised of government actors, and you want to create effective governance, then
You actually need to work with the private sector. You can't do this by yourself because you don't have the knowledge. You won't move fast enough. You don't understand what needs to be done. They're the ones doing it. And if you don't do that, they will de facto regulate themselves. And that is a challenging thing because their interests are in treating citizens at best as consumers and at worst as products.
It didn't work out too well for social media, did it? Not as citizens, no. And governments, at best, care about people as citizens. So, I mean, if the government is going to work for the people, the government has to work with the private sector on AI. That is the reality of where we are today. And what we have certainly learned,
with a lot of humility at the United Nations is that if you want to start governing something effectively, you first need to define it. You need to understand it. You need a common understanding of what AI is, where it's going, both the opportunities and the concerns. And, you know, this is where the UN was so effective on climate change. And it took a long time
But we're now in a situation where almost every single member state, no matter if they're poor or rich, small or big, a dictatorship or a democracy or something in between, they all agree that we've had 1.2 degrees centigrade of climate change.
They all agree on how much carbon and methane is in the atmosphere. They all agree on the implications of that for extreme weather conditions and loss of biodiversity, the impact of plastics in the oceans, you name it, right? In other words, all of the negative externalities from globalization, the world agrees on that. Now, they don't agree on what to do about it. They don't agree on the resources. But my God, if you can agree on the problem and the opportunity, then when you have resources, you'll deploy them more rationally. When you have conversations about global governance, it's much more likely to move in the right direction. It's so much easier to foster cooperation as opposed to zero-sum conflict when you all agree on what the challenge is. So we are now at the point in the United Nations on global governance, not where we're calling for, we've got to create an authority that has the power to compel behavior. It's not that. No, we're at the point where we want to commonly define
the opportunities and the risks that come from the state of play of artificial intelligence right now and going forward. Because doing that will give us the tools we need to govern.
I think that is very important for a technology as nebulously defined as this one, with many associated technologies that need to be taken in concert to govern it effectively. And of course, a central point in what you're saying seems to be that we cannot leave this to market forces, right, to influence the development and deployment of this technology. We know exactly how that went with social media: not super well for the end user, maybe not even that well for society as a whole. Oh, clearly. Clearly horribly for society as a whole, right? I mean, social media as it stands right now is undermining the civic architecture of our societies. And that is not the intention of social media. It is purely an unintended but logical consequence of the business model.
Oh, 100%. If you're optimizing for engagement and that's the dial you're moving things towards, funny things will happen. Now, AI, again, presents its own challenges, and you're kind of alluding to this. So I have to ask, what should be the guiding principles for a technology that, as the report states, is: one, non-explainable by design; two, still not fully understood by experts; and three, borderless by nature? How on earth do we govern a technology like this at the global level? Well, I think about what you want from a technology like this. In addition to having the world come together to define the state of climate today and how it's been changed by humanity, the world has also come together to define goals. We have a Universal Declaration of Human Rights and we have Sustainable Development Goals. The world agrees that people should have the opportunity to engage in productive labor.
The world agrees that people should have adequate shelter. The world agrees that people should have adequate food. The world agrees that people should have adequate water.
The world agrees people should have access to health care. I mean, these are basic things. And yet the world agrees that there should be goals for the eight billion plus people on the planet, and we should orient towards those goals. So the baseline for artificial intelligence and governance should be: how do you help use and deploy these technologies in service of those goals? There's nothing really bigger than that, right? It's not more complicated than that. Right now, we are not actually on a path to achieving those goals. In fact, since the pandemic, we've actually backslid on things like hunger and on forced migration. AI is a singular tool
available to humanity that could help, I think will help humanity achieve those goals if it is deployed in service of those goals. There is nothing more noble and nothing more sustainable for our planet than to accomplish that.
And so I think those are your principles. And I think the goals of the UN high-level panel on artificial intelligence governance should be in service of that. It really is a question of objective functions. And what you're saying is, hey, there's already global consensus on these things that we want to achieve, you know, Agenda 2030, the Sustainable Development Goals. Let's put this new technology to bear on making a huge dent in all of those goals as quickly as possible. And by the way, I think it's easier than climate in a fundamental way, and that makes me very excited. With climate change, there were an awful lot of very rich, very powerful actors, governmental, individual, corporate, banking, that really saw addressing the climate agenda, never mind advancing it, as existentially threatening to their wellbeing. That is absolutely not true of AI.
You do not have people in industry out there that are saying, I need to pull the wool over collective humanity's eyes and not have AI used in service of humanity. It's not that. It's actually that if these people focus purely on the business models and the market agenda, they're just not going to get to all of the other opportunities for AI, because it's not a priority. So they won't bother with the global South. They won't bother with a lot of the poor. They won't bother as much with biodiversity, because there's so much low-hanging fruit for them, you know, working with the Fortune 500 and working with the deep-pocketed in the Gulf and Europe and the United States. And so the point is, if you just create that knowledge,
if you create those opportunities, if you let everyone know... I'll give you another point, Bilawal. If you talk to the owners of these AI companies, the technologists, the CEOs, the founders, the shareholders, you know, they all are looking for good things to do. They want it. It's not like they're not charitable people. Right. Totally. Altruism is a huge part of the culture in many of these companies, too. It really is. But they don't necessarily know what the right thing is to do. So if you don't have a global conversation with common standards and with a common global scientific and policy understanding of here are the ways that AI can have the greatest impact, then you are going to waste so much money.
You're going to waste so much good effort. Instead of having AI for humanity, you're going to have AI washing. You know, you're going to have people just saying, hey, here's what I'm doing with AI and just make sure that at least they have a good story to tell. I'm not interested in giving people good stories to tell. Right. So this is actually this is something that is completely doable and it's not a collective action problem. It is just a coordination issue.
I love that. Yeah, it's like the saying in Silicon Valley: move metrics that matter. If you can't define the metrics to move, everyone's just going to be running in conflicting directions. Exactly. And so by having sort of a common set of goals to move towards, I think that could be extremely, extremely impactful. You brought up the global South, and the report speaks to it directly. What specific measures are being proposed to ensure this equitable access to AI technologies and to prevent this widening of the digital divide? There are so many languages that are already underrepresented in the modern large language models, but there's so much more than that, too, right? Like, how the heck do you even get access to the NVIDIA GPUs that you need to train or fine-tune your models? So how is that going to work?
Well, I mean, first of all, it's going to start slowly and then hopefully it picks up. So, you know, you've got two concrete recommendations that we hope will be taken up by the member states. One is an AI capacity development network.
And that idea is to have UN-affiliated development centers that make expertise, compute, and AI training data available. And, you know, that means you're going to have sandboxes to test AI solutions and learn by doing. It means you have educational opportunities and resources for university students in the global South, and programs for researchers, social entrepreneurs, and training for public sector officials, which they generally don't have right now; you know, a fellowship program for individuals from the global South to spend time in top programs working on AI in the developed world. These things have to be created. There's a second recommendation, which is connected to that, which is a global fund for AI. In other words, you end up raising the money from the private and the public sector that would support this development capacity.
Now, look, I'm not suggesting that this is easy or done overnight. And when you have half of Africa that doesn't even have access to electricity, capacity development for their data isn't going to get them on AI, right? So there are big issues out there that have to be addressed.
But this is a unique tool. And the idea is that every million dollars that goes into this global fund is going to be oriented in a way that the world believes is most effective to get AI to be used by the world for the development of humanity. It's inconceivable that an individual corporation would try to do this or government would try to do this by themselves.
Well said. And I think you also make the point that the incentives are aligned, which is often not the case. By providing this baseline level of compute and capabilities for these markets, you're basically unlocking markets for these companies as well. So I think the incentives there are quite well aligned. I have to ask one question related to that, which is the common pushback in Silicon Valley when it comes to all of this stuff, which is that AI companies, particularly in Europe, feel that the existing regulatory landscape, right, GDPR and so forth, the DMA, is already so restrictive, and it's the large companies that benefit from this. And hey, what's going to happen to all the smaller companies and the smaller players? How do you balance consumer protection while also enabling innovation? How do you think about that? I do fear that at the platform level,
the amount of compute that is required, the energy to service that compute, the water to service that compute. I mean, you're talking about a small number of corporations and governments that are capable of doing that.
And the data that they have available to them allows them to identify startup corporations very quickly that they can buy, support, or kill.
And we've seen that historically with Facebook slash Meta. We've seen it with a lot of organizations. And we're seeing it play out right now. Some of the startups that raised hundreds of millions of dollars are like, we just can't keep paying this compute cost. So, look, there's certainly an argument to be made that maybe, if AI models go open source, it's possible that you end up with a whole bunch of players that are using that AI in their own local applications, their own central applications, that make a big difference. But most people in AI would not make a strong bet that that's where AI is going. Right. I mean, some are making that bet, but most of the money is not heading in that direction. Certainly, that's not true in China, which we haven't talked about, and which is a wholly different kettle of fish because it's largely self-contained. But on the industrial side, what China does is going to be extremely important. Look, I mean, I could answer this question in a more dystopian way, which is: for most of my life, the reason that state capitalism was a failure is because you have kleptocracy, inefficiency, nepotism that makes those companies make very bad decisions. But it might well be
that when AI is deployed in real time with access to massive data sets, that AI making decisions at the enterprise level to drive corporate growth might prove to be more efficient than what the private sector can do by itself.
And if that is true, then you will have a very strong anti-competitive impulse because suddenly economies of scale together with efficiencies driven by AI means that the best possible outcome you can have economically is non-competitive.
But if that's where we're heading, then the role of effective regulation to ensure that individuals have rights that are safeguarded becomes far more important, right? Because people and consumers no longer have the ability to choose A as opposed to B, right? They're going to be in an AI environment that controls everything, you know, like a fish in water. Right. Exactly. I am of the view that consumers within five years will probably use an AI ecosystem for an incredible amount of social, business, educational, health, and other relationships. And they'll probably all be the same AI.
Because there'll be advantages in having that data set together in one place. And you'll have preferences depending on your economic well-being and what kind of service you want and everything else. Also your geography. Okay. But if that's the case, there would be real costs in you switching. You already feel this. Like, if you're on Twitter and you've spent a bunch of years building up your following and your connections and suddenly you want to go off... You're locked in. You're locked in. You have no rights. If that's where we're heading on the consumer side, and if on the industrial side we end up with, again, a small number of players that have extraordinary amounts of data, that are able to drive the innovation because they have the data, the compute, the energy, then the role of effective governance and regulation at the national and at the global level becomes far more urgent. So I am not answering your question well, because your question was, well, what happens to the small companies? I'm like, I'm more concerned that either the small companies end up dying, or you can have a whole bunch of small companies, but as they become big enough to be noticed, they have to align. What happens to the small creators? They can all create. But if you become big enough that you make a move, you've got to sign up with a record company. You've got to be a Facebook influencer, Instagram, whatever it is, right? I mean, I'm saying I could see that happening in every sector with every corporation and startup. But if that happens, then we've got to pay a lot more attention to making sure that individuals still have these rights. You're bringing up this point where, in the current paradigm, at least the one we're experiencing, it does seem that the companies that control large amounts of compute and large amounts of data are going to be at a huge advantage. They're going to be creating these products and experiences that really mediate our interactions with the digital world and, increasingly, the physical world, too.
And so, you know, in this context, open source is brought up as this sort of counterbalance, if you will. And it's a very hotly debated topic, too, because there's a spectrum of sort of open source and closed source, right? Like, let's be honest, most of the open source models in the wild today are more like open weights versus fully open source models where you have access to the training data and the source code and all that good stuff. Right.
The report advocates for this stance of meaningful openness. Can you just elaborate on that? It means that you are hoping that people, governments and organizations around the world aren't purely takers.
of AI models, values, preferences, and products that come from the biggest global platforms in the West and in China. That you hope that there is meaningful openness, enough openness that individuals and enterprises and governments will be able to actually deploy AI in meaningful ways for their benefit over time. That is the hope.
That's what you're calling for. But there's no power behind that. There's no money. There's no authority behind that. That's just the collective view of 39 people around the world that happen to have various expertise on AI. And it's something that was fairly easy to agree on. Now, I will say, because I'm agnostic as to which way this is going to go, as opposed to which way I would like to see it go,
I mean, analysis is not preference here, right? And that's a very important thing to keep in mind when you're talking about these issues. There are different risks that come from these models. So if it turns out
that overwhelmingly we have closed models and a very small number of companies maybe that need to be aligned with either the United States or China that end up dominating the AI space, then the critical governance framework you will need for national security and sustainability is a series of arms control agreements in AI between the Americans, the Chinese, and relevant private sector actors.
like what you had between the Americans and the Soviets. But we only got that after the 1962 Cuban Missile Crisis, when we almost blew ourselves up. That's a serious problem, right? Because in the first 15 years, the Americans and the Soviets are developing A-bombs and then H-bombs and then missiles to launch them, and all of this capacity. And we're going as fast as we can to assert dominance, and the Soviets as fast as they can to undermine that dominance. And, you know, you don't want to talk to your enemy and let them know what you're doing because, I mean, that's just going to give them information. It's going to constrain you. Well, no, it turns out you really need to have those agreements, because otherwise you will blow yourselves up.
And so that is essential, I would say, for the next U.S. administration. Secondarily, if it turns out that the drivers of cutting-edge AI move toward not just meaningful openness but radical openness, where almost anybody in the world, good actor, bad actor, in between, tinkerer, can take AI models and deploy them for their own purposes, that means that the future of AI technology is going to be more like the global financial marketplace. It'll be systemically important.
Everyone will use it and everyone will need it for their own purposes. But also almost anyone could be a systemic threat to it, right? And could bring it down. Kind of like, you know, you've got a couple of yahoos on Reddit talking about GameStop, and before you know it, you've got a challenge for the global markets, right? And so then what you need is a geotechnology stability board, something like the Financial Stability Board, that says, okay, we are actually not politicized. We're independent technocrats with an intention of identifying potential threats to the AI marketplace, ensuring resilience,
and responding to crises, ringing the bell and saying we've got a crisis, and then responding to them collectively as soon as humanly possible. That's a radically different kind of governance than the U.S.-China arms control governance. And we presently need to start thinking about both because we're not actually sure which
sort of methodology, if you will, is going to be most successful in how AI will be deployed over the coming years. We just don't know.
I like the finance analogy a lot. You almost want these parties, regardless of, you know, military or diplomatic ties, to have those open lines of communication. Because if you've got this interconnected web of open source AI running the world, if you will, it could very well be something where traditional diplomatic channels might be far too low-bandwidth, or almost stifling, to respond as quickly as possible.
Well, it's funny. I'm sure you've read The Ministry for the Future by Kim Stanley Robinson. I haven't, but I will. Okay. So it's interesting. It's a book on climate. And it's about, you know, a kind of medium-term, somewhat dystopian, you know, massive wet-bulb climate incident. Millions die in India, lots of terrorism, eco-terrorism, and the need to have global, technocratic, independent governance over carbon emissions and related things, because the world has just failed, failed, failed, right? Fascinating concept, right?
But probably not necessary for climate, in part because the money is coming. The technology is there at scale. We are moving to a post-carbon environment that is, you know, I mean, it's later than you'd like it to be with a lot of costs. But nonetheless, it's going to happen in the next two generations.
AI may well require a ministry of the future. Like you have central bankers today, you may need to have ministers of AI play that kind of a role, who are appointed, who are parts of national governments, but who act in a politically independent way. They're not driven by ideology. They're not driven by the political cycle. They're driven to ensure that the system globally continues to work. And when you have a major financial crisis, you know, the head of the People's Bank of China and the head of the Fed have the same tools. You've got fiscal tools, you've got monetary tools, and you've got the same playbook. You get the same definitions, and you all want to get the markets working, and you really want to avoid a global depression, right? I could easily see that being what the future of AI needs to be like, but it's much harder. It's much harder because the financial marketplace is dealing with exposures and movements of massive amounts of money, but AI, it's not just about the digital equivalent of money, right? It's everything, right?
It's national security. It's an election. It's bio. It's drones. It's anything. So it is like the central bank issue, but it would be much more powerful. And, you know, for people that are thinking about artificial general intelligence, I mean, the reality is that humanity could probably never handle artificial general intelligence unless you had vastly more powerful governance to deal with it, governance at a technocratic and global level.
This episode is brought to you by Progressive Insurance. You chose to hit play on this podcast today. Smart choice. Make another smart choice with AutoQuote Explorer to compare rates from multiple car insurance companies all at once. Try it at Progressive.com. Progressive Casualty Insurance Company and Affiliates, not available in all states or situations. Prices vary based on how you buy.
Hi, I'm Bilawal Sidhu, host of TED's newest podcast, The TED AI Show, where I talk with the world's leading experts, artists, and journalists to help you live and thrive in a world where AI is changing everything. I'm stoked to be working with IBM, our official sponsor for this episode. In a recent report published by the IBM Institute for Business Value, among those surveyed, one in three companies paused an AI use case after the pilot phase.
And we've all been there, right? You get hyped about the possibilities of AI, spin up a bunch of these pilot projects, and then, crickets. Those pilots are trapped in silos. Your resources are exhausted and scaling feels daunting. What if instead of hundreds of pilots, you had a holistic strategy that's built to scale? That's what IBM can help with. They have 65,000 consultants with generative AI expertise who can help you design, integrate, and optimize AI solutions. Learn more at ibm.com slash consulting. Because using AI is cool, but scaling AI across your business, that's the next level.
I have to ask about the China question you raised earlier. There's this ambitious vision for global AI governance that's been outlined here. A common critique of the UN tends to be: how are you going to enforce these guidelines, especially with major tech companies that might play along right now but at some point might want to assert their own independence, or countries like China that might be reluctant to comply?
Well, that's why I use the climate change example: you did not start with compliance. You started with defining the challenge, the opportunity, and the risk. So you had an intergovernmental panel on climate change, and that's what predated all of the, you know, committed, even if non-binding, carbon reductions and targets and everything else. First, you had to have a whole bunch of countries that all agreed these are what the opportunities and challenges are. And that's what we are starting with. We are starting with an international scientific panel on AI,
just like the Intergovernmental Panel on Climate Change, which is meant to regularly convene and define the capabilities, the opportunities, the risks, the uncertainties, the gaps in scientific consensus on AI development and trends. And the point is, if you get the world to agree on that,
and exchange standards and talk about opportunities and align with sustainable development goals, well, then clearly you'll have your annual summits on AI and you'll get governments to say, okay, let's apply some resources to this and let's make some commitments on this. And you'll have some countries that'll want to be leaders on it and they will. And they'll push, they'll prod, they'll try to get other countries to come on board. Now, the latter is much harder and much slower than the former, but you can't do the latter without the former.
My view, which I think has been one that has been shared really by consensus, is that the purpose of this group is to help the world define this opportunity and this challenge. And by the way, the opportunity is bigger than the challenge, right? We truly think of it that way. And this is not coming out there saying we want to have enforceable governance power over what technology companies do and don't do. That was never going to be in the cards, but it would also be premature if you don't actually know what you're trying to accomplish, right?
So don't put the wagon before the horse. Get the problem right first. Define it. It makes a ton of sense, right? Like, if you're not going to survey the landscape and build a common map, you're obviously going to have disagreements from the get-go. But what about more challenging topics? Like, for instance, in light of the increasing use of AI in both the Russia-Ukraine conflict and the Israel-Gaza war, what position does the report take on developing international norms or regulations that specifically address the use of AI in warfare? Well, I mean, there have already been efforts at the United Nations to ban lethal autonomous drones and artificial intelligence in those areas where national security is directly at stake. You know, you also have the beginnings of the U.S., China, and South Korea talking about the nuclear decision-making process, for example.
My view is that governments, a small number of governments, are probably going to be necessary to drive that and not the U.N., even though the U.N. can have a say, because ultimately this will need to be policed.
And the way you police is by having a willingness to deter and some kind of actual enforcement, sanctioning enforcement. There are going to be costs. I mean, the Iranians, you know, are not held back by the non-proliferation treaty. If they're not developing nukes, it's because they're concerned that the Israelis are going to blow them up, with US support, if they actually decide to go for full nuclear breakout.
And I think that when it comes to using AI in these weapons systems, when you are talking about life and death for the Russians and Ukrainians on the ground, and they have access to those tools to make into weapons, they will make them into weapons. So you're going to have to have the most powerful countries in the world be willing not just to opine, but to govern, to exert force, to ensure that AI is not used by itself in targeting decisions, is not used by itself in war fighting in a way that takes decisions of life and death completely out of the loop. We've already seen that as war becomes more and more distant from humanity, it becomes much easier for people to pull the trigger and much easier for wars to escalate. And that's not where we want to be.
Are you personally more worried about digital attacks or political interference, you know, via the Internet, or these physical manifestations of AI systems when it comes to drone warfare and things like that? I'm much more worried about the vulnerability of economic systems to AI manipulation. I'd be much more worried about a financial markets crash or a critical infrastructure crash, especially when you already have antagonists that are deeply inside other systems.
And just sitting there. In other words, a CrowdStrike-type situation, but instead of a mistake, a malevolent hit. I mean, you saw what happened: it cost billions of dollars immediately and shut down all sorts of systems, computers, air travel, you name it, around the West. And that was just a mistaken patch.
What happens if that was done intentionally by an adversary, whether a rogue actor or by a government? I think that's a much bigger concern, but that's not my biggest concern. My biggest concern is something actually much more prosaic. What is it? My biggest concern is that human beings are programmable. We are very susceptible to algorithms. And, you know, when I was growing up,
it was all about nature and nurture. It was about what your genetics say and then how that potentiality is shaped by the human beings around you, individual human beings and collections of human beings. That's now shifting to algorithms. Social media is a part of this problem, but when it's AI, and when human beings are developing principal relationships algorithmically and they're no longer engaging principally with other human beings, so nurture is being replaced by algorithm and by AI, I worry that we will lose our humanity. I think that's, for me, a much more profound, a much more philosophical issue, but it's also a really real near-term threat. You know, we may develop AGI in 20 years, but humans may become algorithmic in five. And I think we should resist that. I think that at the very least, we need a lot of testing to understand the implications before we want to run real-time experiments on humanity. And right now, what we're doing is running real-time experiments, right? We need a smaller sandbox. I think you're totally right. It goes back to technology mediating our interactions in the physical and digital world. That's right. And that means the surface area for us to be impacted and influenced is greater than ever before. I'd argue this is already the case today. It is.
That's in part why online advertising is just so darn effective. That's why memetic warfare is a thing, right? Yeah, but your relationships still are principally with people. Mm-hmm. Right? Oh, yeah. Now, once we have intelligent entities that are talking to us and...
Oh, good Lord. Right. I mean, we've already, you know, basically seen AI pass the Turing test in the last year. And now we have a bunch of people in AI that tell us that that's not that important. That's moving the goalposts. See, it's moving the goalposts. I think it actually is essential to what matters, because what matters for us, at least, is humanity.
And so if we can no longer tell the difference between what is and is not a human being, and we start engaging more principally with non-human actors in service of other goals that have nothing to do with human beings, then we have just unwillingly, unknowingly given up something essential. I'm deeply uncomfortable with that.
Once this report is made public, you're obviously going to get a huge amount of feedback from various stakeholders, right? What would you like the audience that's listening to this to do? Like, should they engage with this report? And how should they give their feedback to what y'all are proposing? I think they should certainly talk about the fact that AI is a tool that needs to be used for all of us. You know, if there's any real message here,
It's that we are turning much more inwards at a nation-by-nation level, and we're not thinking about common, collective humanity. We are now inventing tools that allow us to improve ourselves together and to redress a lot of the damage that we've done to the planet. I mean, if I look at young people today, we need to be honest with them. You know, we've not been effective stewards of the planet, and we need them to make better decisions than we have made. I would argue this report has been an effort over the last year to try to give young people an opportunity to make better decisions. And that's the way I want people to respond to this report.
Awesome. Ian, thank you so much for joining us. My pleasure. Okay. Ian's insights are a wake-up call. AI isn't just about code and algorithms. It's a global challenge that demands global cooperation.
just like climate change or the interconnected web of financial markets. I mean, think about it. When a financial system teeters on the brink, central bankers don't care about borders or political alliances. They pick up the phone because their fates are intertwined. Their incentives align to prevent disaster. That's the kind of urgency and cooperation we need for AI.
What gives me hope is that this isn't some impossible challenge. We have achieved this level of cooperation before. Now, the question is, can we replicate it for AI before it's too late? Will we see a world dominated by a few tech giants hoarding the power of AI? Or can we foster a scenario where a thousand flowers bloom, where AI empowers individuals and nations across the globe in a truly interconnected yet decentralized AI ecosystem?
This conversation with Ian has been a reality check. AI isn't just about futuristic hype. It's about the choices we make today. This UN report is a vital blueprint to build a shared understanding of what exactly it is that we're trying to regulate.
But the real work begins with us understanding the complexities, demanding transparency and pushing for responsible AI development. And doing that without stifling innovation and nimble startups. The future of AI is not predetermined. It's an interconnected tapestry and a global story we're all writing together.
The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Ben Montoya and Alex Higgins. Our editors are Banban Chang and Alejandra Salazar. Our showrunner is Ivana Tucker and our engineer is Asia Pilar Simpson. Our technical director is Jacob Winnick and our executive producer is Eliza Smith.
Our researcher and fact-checker is Christian Aparta. And I'm your host, Bilawal Sidhu. See y'all in the next one. Support for this podcast comes from Odoo. Imagine relying on a dozen different software programs to run your business, none of which are connected, and each one more expensive and more complicated than the last. It can be pretty stressful. Now imagine Odoo. Odoo has all the programs you'll ever need, and they're all connected on one platform.
Doesn't Odoo sound amazing? Let Odoo harmonize your business with simple, efficient software that can handle everything for a fraction of the price. Sign up today at Odoo.com. That's O-D-O-O dot com.