
Navigating National Security in the Age of AI

2024/11/4

The Truth of the Matter

People
Anja Manuel
Topics
Anja Manuel argues that the greatest threat artificial intelligence poses to national security lies in its intersection with other technologies, such as cybersecurity, chemical and biological weapons, and autonomous weapons. She stresses the need for rigorous safety testing of large language models and notes that U.S.-China competition in AI is intensifying: the United States needs to stay ahead in AI and related technologies while also attending to AI's negative effects on election security and the information environment. She also calls for stronger international cooperation to address the global challenges AI presents. Andrew Schwartz mainly guides Anja Manuel through her views by asking questions, exploring AI's impact on national security, U.S.-China technology competition, Congress's understanding of AI, and the future national security landscape.


Transcript


I'm Andrew Schwartz, and you're listening to The Truth of the Matter, a podcast by CSIS where we break down the top policy issues of the day and talk with the people who can help us best understand what's really going on.

To get to the truth of the matter about the nexus of artificial intelligence and national security, and a terrific new report by the Aspen Strategy Group, Intelligent Defense: Navigating National Security in the Age of AI, we have with us Anja Manuel, who's the Executive Director of the Aspen Strategy Group and Aspen Security Forum. She's also a principal at Rice, Hadley, Gates & Manuel LLC. Anja, thanks so much for being here today.

Andrew, I'm so happy to be here with you. It's fun to do this. So tell me about this terrific report. It's got people like Joe Nye writing for it, Graham Allison, two of my favorites. Then you've got people like Mark Esper, Chris Coons. It's a bipartisan effort. It's a who's who in national security, made up of the members of your Aspen Strategy Group.

Tell me, why did you guys do this report in the first place? I know it's a compendium of commentaries, I should say. Why'd you do it?

And what are the risks we're dealing with here, with AI and national security? Yeah. Thank you, Andrew. As you know, the Aspen Strategy Group is a group of senior former, and some current, Democratic and Republican U.S. government officials. We get together once each summer, totally behind closed doors, totally off the record, to tackle a very big problem of foreign policy or national security and see if we can't get to something close to a bipartisan answer.

And of course, in the last couple of years, artificial intelligence has exploded onto the scene. I live here in San Francisco. It seems you can't talk about anything else. And while, of course, there are enormous benefits, there are going to be huge efficiency gains in all sorts of industries, creativity gains. My kids are already using it in school. AI is amazing.

But from a national security perspective, as you know well, there is a dark side. And so we spent four days this summer really exploring that intersection. And let me just tell you a little bit about how we saw the intersection between artificial intelligence and national security.

So while we didn't come out with one report that reflects everybody's views, different people wrote different pieces depending on what they found. But I will tell you the themes that came out of it. The greatest danger artificial intelligence poses to our national security is how it intersects with other technologies. One, cyber. There's a big worry that AI will write malicious code itself, and it's already being used to probe and to scale up existing phishing attacks. It can also be used to defend.

Two, how it intersects with chemical and biological weapons. AI is like having a PhD student on your shoulder, coaching you, if you're a terrorist group or someone else trying to create some of these dangerous weapons.

And three, of course, autonomous weapons. You're seeing them already in the Ukraine conflict and elsewhere. These are really going to revolutionize the way we think about our national security. You know, in your piece, which was published as an op-ed in the Financial Times, you talk about AI being a severe national security risk.

Is there an urgency that we really need to deal with this right away? Is this future severe? Is this right now severe? And how do you think about mitigating a severe risk like this?

The people I talked to who are developing these language models and who are starting to test them think that the risk to national security could be severe now. So this isn't something where we're waiting for artificial general intelligence. That's a whole other ball of wax, and Yoshua Bengio and others wrote about that in our report. What I argue for in the Financial Times piece is that

It is time to do very narrowly focused, independent, rigorous AI safety testing, only of the largest language models. And in fairness, the UK AI Safety Institute was set up in just about a year; it's impressive how fast they moved. They really pioneered it, didn't they? They really did. I mean, from Bletchley Park, which was sort of the first conference on this,

a little over a year ago, to setting up the AI Safety Institute, they've been really impressive. And I will say this to credit the labs that are creating the largest language models: many of them have voluntarily put their LLMs through this testing. So this is ChatGPT you're talking about, OpenAI. What are some of the other big ones?

Yeah, I'm not sure they've all said who is doing what, but I do think OpenAI and Anthropic definitely have said they're voluntarily allowing model testing and some of the others as well. And so...

You liken this kind of testing to the FDA testing a major drug. Why is it the same thing? Yeah, it's not the same thing. Thank you for asking that. It's a very imperfect analogy, but I was writing an op-ed for a general audience. The point I was trying to make is that we don't let dangerous pharmaceuticals out to people before we test them. Right. It is not entirely

clear what the best model here is. What I'm arguing for in this piece is that it needs to be mandatory, pre-deployment, and super narrow. So really only focused on the things that you and I talked about before, Andrew: how it intersects with bio and chem, whether the model can jailbreak itself, those kinds of things.

Eventually, maybe you'll also get to, is there bias, all of these other important issues. But I would argue we should focus right now on what could be the physical harms created by AI. So what are some of those?

They're the ones that I talked about a little bit ago: the idea that it's now much easier, with the help of a large language model, to create a biological weapon, a chemical weapon, incredibly advanced malicious code. Those are the kinds of things we're looking for. Of course, one of the undercurrents here is competition, specifically competition on AI between the United States and the West on one side and China on the other.

Where does that stand in your mind, and should the United States and others be worried? Yeah, I do think the United States and others are worried. Depending on who you ask, our biggest general-purpose large language models are three, six, or twelve months ahead of where the Chinese are.

And as you know well, Andrew, the United States is trying to keep ourselves and our friends and allies in the lead by putting export controls on the most advanced compute and semiconductor manufacturing equipment. That's all important, and it's done important work. But it only goes so far. The Chinese are excellent entrepreneurs.

And they're really good at the other parts of AI that aren't anywhere near artificial general intelligence. For example, computer vision, surveillance, autonomy, drones. They're likely as good as or ahead of anyone else.

And if you really want to talk about U.S.-China competition, what about the larger technological race between China and the Western world? Where does it stand, and what technologies do you think are the most important? Obviously, chips are at the center of this, but how do you view the competition?

Thanks for zooming out in that way. And you're right, we've been very focused on artificial intelligence. Super important because, of course, it's like electricity: it's a multi-purpose technology that will run everything else. Semiconductors, which you already mentioned, are the building block of everything else we do in technology. I would add to that list

a few technologies where I believe the United States and our friends and allies should stay competitive or even in the lead. 5G and 6G, which run all of our communication networks, self-driving cars, the Internet of Things; there, I've got to tell you, China is ahead, right? Certain parts of biotechnology, especially where it intersects with bioweapons.

And frankly, some parts of financial technology, fintech, payment systems; the Chinese are incredibly good at that. I would say the United States has done a terrible job regulating blockchain and cryptocurrencies. We should be leading on that. We have amazing innovations, and our government has kind of screwed up the regulatory part of it. You can add others, quantum here and there. But the important thing is that it's not all parts of technology where we're competing with the Chinese.

And one really important thing: it often gets talked about as a technology war or tech decoupling. I like to see it more as a big race. We're already really innovative, our entrepreneurs are excellent, and we need to run faster more than we need to slow the Chinese down.

I mean, if you talk to many policymakers, and this is across the aisle, and certainly if you talk to folks out in Silicon Valley and San Francisco where you are, they believe that tech is fully the United States' competitive advantage and that we need to take full advantage of our amazing capabilities. How do you balance that with national security? Well, here's one example. We just talked about export controls. I think the Biden administration is trying to do a very careful, nuanced job of balancing

slowing China down, which is the export controls and, to a lesser extent, the outbound investment restrictions that you just saw come out a couple of days ago. That's only going to have limited utility, but they're doing what they can.

The other part they've been focused on is helping our own companies run faster. In that bucket, I would put the CHIPS Act. The CHIPS Act is famously the $52 billion in subsidies to build some semiconductor fabs here. But even more important, and frankly it got none of the press, it was also supposed to include $173 billion over five years in additional basic research and development funding.

That is the building block of everything, and it is so important. And frankly, you live in Washington, so you know this: that part didn't have any people lobbying for it, and so it has been chronically underfunded. I think that is the really key thing that will help keep us in the technological lead.

It's fascinating. One of the pieces in this compendium is by David Ignatius, and his headline is "No Manhattan Project for AI, but Maybe a Los Alamos." I think that's kind of what you're talking about there in terms of research. A little bit. A little bit. I am talking about that. David, I think, also makes a slightly different point, which is that you need security for some of the largest labs.

And that's difficult, because of course you're going to have cyber attacks; of course people are going to try to steal that IP. But yes, you need the basic R&D. And frankly, and this is a really hard thing, my friends at Stanford who are working on AI are constantly talking about this: the huge resources needed in compute and in electricity to drive all those data centers

are mostly owned by the private sector, and large universities can't keep up. You're not going to have the kind of chip clusters in academia that you have in the private sector. So we need to find, hopefully, a thoughtful way where maybe universities get to use the private sector's clusters for a while rather than having to build their own. But we need to find a way to make sure that scientists also have access to this. So as you think through the national security landscape on this,

There is, as you said, a lack of mandatory, rigorous testing. In the United States government, particularly on Capitol Hill, when the internet and social media really exploded, there were these famous hearings where members had no idea what was going on with social media. They were asking questions like, well, if I friend somebody, does that mean they can see all of my stuff, all my pictures? They just didn't know. Chuck Schumer and others have made AI more of a priority.

How do you feel about the level of congressional understanding of all these issues that you're pointing out need regulation and testing? What is the appetite on Capitol Hill for this? And is it across the aisle? Is it a partisan issue, like seemingly everything else? We like to think national security is not

a partisan issue. Right. Well, Andrew, you're based in Washington, so you may be able to assess that even better than I can. My sense from the conversations I've had on the Hill about this is that it's not particularly partisan, depending on which aspect of AI you're talking about. The idea that

we should probably stay in the lead over the Chinese is broadly shared. The idea of some very limited regulation depends on who you ask: conservatives obviously want more laissez-faire, and I think Democrats would be up for more regulation. There are probably some differences there, but I don't get the sense that this is as horribly partisan as so many other things are.

And what do you predict for the future in the national security community when it comes to this? I know there are a lot of people thinking about this, a lot of people working on this. Before Henry Kissinger died, on his 100th birthday I was fortunate to interview him. Yeah. I asked him, I said, Dr. Kissinger, what keeps you up at night these days? And without missing a beat, he said very quickly:

Nations used to battle over territory. In the future, they're going to battle over data. What are some of the future things you see here that keep you up at night? Yeah, that's an interesting way of putting it. So this is a great point that you bring up because I am actually part of a track two dialogue on artificial intelligence between the U.S. and the Chinese.

And that is something that was very dear to Secretary Kissinger's heart. He helped stand it up before he passed away, and two senior gentlemen from the U.S. are continuing it. The Chinese side has been very open. Look, we haven't gotten anywhere yet, but this is really important. And I think this comes out of what Dr. Kissinger told you.

We need to involve the Chinese in some of these discussions. Not every detail, but I think it was fantastic that at Bletchley the UK included the Chinese, and that there has been some dialogue with China around how you would restrict the use of AI in a military sense. We're not there yet. And people sometimes say, well, hey, we're not there yet, we're never going to get there, and international law is always violated anyway. To them, I say:

After the United States exploded nuclear weapons, we were the first to come out and call for the control of nuclear technology. That was in the late 1940s.

The first real arms control deal wasn't signed until the Kennedy administration in the early '60s. So it took a long time. But in the meantime, there were the Pugwash conferences. There were scientists talking to each other. There were people of goodwill on both sides who wanted to make sure that this technology didn't get out of control. I think that's a little bit of what Dr. Kissinger was talking about, and I hope we can continue that legacy.

It really did keep him up at night. In fact, he and Eric Schmidt were working on another book about AI. And unfortunately, he won't be part of that next generation of thought, but he certainly left us with an incredible legacy.

Anja, of course, we're talking a couple of days out from the presidential election, and we're in election season. And, you know, let's face it, we're perpetually in some form of election season. Chris Coons writes about AI deepfakes in your compendium. We know that there are malign actors out there trying to influence our elections in one way or another. How does AI play into elections and election security?

Yeah, that's a very good point, and I'm glad that you pointed out Senator Coons's very good article on this. It's dangerous.

AI is taking what we were already seeing in terms of election interference, misinformation, and disinformation, and it's just supercharging it, just like it's supercharging the efficiency of all sorts of other, positive processes. What I worry about most, and you and I, along with most of your listeners, have been following this election closely, is that we're getting to a situation where people don't trust anything anymore.

And that's very dangerous. It worries me when I talk to people who are voting: they don't know who to trust. They trust only the people they care about, and it's not very clear that those people always have accurate information. I don't have a good solution to that. But boy, this is not a world that we want to continue to live in. We need to get back to facts and truth; then we can disagree once we have a common basis of fact. Finally, I want to ask you about this.

What are you guys doing at Aspen? Is this going to be a huge focus of the Security Forum going forward? Is this something you see for the foreseeable future, all the different machinations of AI and national security? That'll be a really key topic for you all. Yeah.

What I think we are best at, at the Aspen Strategy Group and also the Security Forum, which is our public-facing conference, is bringing together people of goodwill and great intelligence who don't necessarily see eye to eye and who have different expertise. And so we did it this summer on artificial intelligence. We gathered senior national security officials,

senior people on the cutting edge of AI technology, and people in the business sector. A lot of those people didn't know each other and didn't speak each other's language. And so while AI will be one part of the focus, technology will be a larger focus.

But really, what we're trying to build here with the Security Forum and the Strategy Group is to get us back to the idea that we can solve some of these really difficult, thorny problems, technological or otherwise, and that we can do it with goodwill and with a minimum of partisanship. Anja, thank you for coming on and sharing the new publication from the Aspen Strategy Group, Intelligent Defense: Navigating National Security in the Age of AI. We really appreciate it.

Thank you, Andrew. If you enjoyed this podcast, check out our larger suite of CSIS podcasts, from Into Africa, The Asia Chessboard, China Power, AIDS 2020, The Trade Guys, Smart Women, Smart Power, and more. You can listen to them all on major streaming platforms like iTunes and Spotify. Visit csis.org/podcasts to see our full catalog.