I'm Andrew Schwartz, and you're listening to The Truth of the Matter, a podcast by CSIS where we break down the top policy issues of the day and talk with the people that can help us best understand what's really going on.
To get to the truth of the matter about the newly released National Security Memorandum on AI, we're doing a special episode. It's actually a crossover with our AI Policy Podcast, and we have none other than Gregory C. Allen with us to talk about it and make sense of this important new memorandum. On October 24th, the Biden administration released its National Security Memorandum on Artificial Intelligence.
This memo fulfilled the administration's October 2023 AI executive order requirement for the development of a national security memorandum. So before we dive into the content, let's go over some of the historical context for this memo. Greg, can you give us some background on what the historical precedent is for a document like this? Andrew, so good to be talking with you again.
I think there's a good reason for starting with historical precedent, because that's how the Biden administration sees this document: in the light of historical comparisons. So if you think about the history of the Cold War, there's a legendary document, NSC 68, which was really the first statement of the U.S. national security community on what life with nuclear weapons was going to be like for the United States. And that document really runs as a through line
of U.S. policy throughout the entire Cold War. There's some other documents in the early days of space technology policy, mostly coming out from the Department of the Army and the Department of the Navy. But those documents really were the sort of baseline policy statement of how these transformative technologies were going to and already had transformed national security and what U.S. policy was going to be about it.
Now, in the case of this document about artificial intelligence, I do think it's worth sort of separating what part of the AI story this is talking about. So, you know, me, I'm in my mid-30s. I'm what passes for an old timer in AI policy these days. And when I started in AI policy, it was really all about machine learning and deep learning, the type of technology that
powers face recognition, the type of technology that powers voice recognition. This document mostly doesn't care about that part of artificial intelligence. This document is all about frontier AI systems. So the absolute most
bleeding edge of the state of the art as it exists today. And also, and perhaps more importantly, the trajectory of where we are going in AI. So think about the best version of ChatGPT. ChatGPT just got a big upgrade not that long ago, and its competitors by Google, Anthropic, and others.
This document sees that technology, frontier AI, increasingly general AI systems, as a transformative national security technology on a par with the nuclear revolution, on a par with the space revolution. And the U.S. government is looking to make big, big moves with that in mind. So, Greg, who is this document actually for? It's primarily unclassified. Why is that?
Well, just going back to my own time in the Department of Defense, there was an unclassified DoD AI strategy and there was also a classified DoD AI strategy. And I will tell you, a lot of people read the unclassified one, and even people who had security clearances didn't always know there was a classified one.
Interesting. So even just speaking to your own government, there's a great virtue in having a lot of this be unclassified. But I will say there are multiple audiences that the Biden administration is trying to speak to simultaneously. And you do that when you have an unclassified document. You have a very high profile release for this document. So let's think about who they're talking to. I mean, I can imagine the people in Seattle and in the Bay Area are scouring this thing today. Indeed, indeed. And that's why we have this podcast to help.
them and others. It's their cliff notes. Indeed. So there's multiple audiences to this. I do think the first and foremost audience is the people who are executing the work of the United States government. This is a statement of policy. It gives specific directions. It gives specific tasks that agencies are required to complete by certain deadlines. And this is meant to be the high level guidance for what the U.S. is trying to achieve and the basic steps that they're going to take in order to achieve it. So that's the first audience. The second audience, I think, is U.S. allies.
There have been big moves in AI policy on national security that the Biden administration has already taken. You and I have talked about the October 7th, 2022 AI chip export controls at length. I think a lot of allies were confused by that policy because the policy talked about things that were really happening. You know, China using,
AI chips designed by American companies in their military systems, China using AI chips in their military supercomputers for nuclear weapons modeling.
But that type of thing happens a lot and had been going on in the Obama administration, in the Trump administration, et cetera. And so it didn't necessarily by itself explain, at least to many allies, why there was this big shift in trade and technology posture towards China. And I have known the answer from conversations with government officials. But this is the first time I think we've seen a big,
on-the-record answer from the Biden administration. That policy makes so much more sense when you understand that the Biden administration are true believers on the power of frontier AI. Where we are now and where we're going, when you understand that
they view that technology as absolutely transformative, as on a par with the early days of nuclear, the early days of space. That's when you understand why they thought it was essential to completely rejigger U.S. trade and technology policy towards China and other steps that they have also taken. So I think it's a really helpful document in explaining it to allies in that regard. Okay, so...
Let's talk about this. Wait, there's one more audience we ought to talk about. Who? Adversaries. Sure. China's going to read this document. China is going to pore over this document at length. And I think what's really interesting here is there are some messages. Obviously, they're not aimed at China, but I do think they'll have special resonance with China. So, for example...
Multiple times this document says that we're going to redouble our efforts on counterintelligence, and that it is U.S. policy to protect the AI industry's critical intellectual property and the security of their systems. So the type of posture that the United States has historically had towards critical infrastructure, like our electrical grid, like our water systems, et cetera, this document sort of puts AI technology, the intellectual property, as well as the actual
model weights and infrastructure, as sort of in that same league. And if I was China, I'd be thinking, whoa, that's a big move. Well, it's really interesting because the United States has never made social media, for instance, critical infrastructure. Not even close. Right. And so, you know, AI falling into that category is pretty interesting. It's a big move. All right. So let's talk about the national security implications because it is a big move.
How does the administration see AI and its implications for national security? You've touched on this a little bit, but what are they most worried about?
So I think it's actually worth reading from some of the document here. So we're beginning in just Section 1, Policy, where they sort of say what they're up to. Quote, AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions. And then a little bit further. So what does that mean in practice? Well,
The United States must lead. It is a national security priority to lead. Okay. This is a big deal. Exactly. So think about it. This is a dual-use technology. Most of what's going on in AI right now is not going on in the U.S. national security community. But because the dual-use implications of this technology are so profound for national security, the United States now has it as an
official national security priority to lead in this technology. All right. So give me an example of that. What is a dual use thing that is a real implication for national security? Well, I think what's interesting, this is a national security document. There's an immigration policy section.
There's an industrial policy section. And this is what I mean. When they say that leadership in AI is a national security priority, but it's a dual-use technology, then all of these policy mechanisms that you wouldn't normally say fall in the national security bucket are suddenly in a national security memorandum. And it's not because the Biden administration is confused. It's because they see these are the stakes we're playing with, and these are the sources of competitive advantage.
And if we're not willing to do what it takes to move on these things, then we're not going to win. I think about, for example, in Washington, D.C., we're constantly talking about the Sputnik moment, right? This period of time when the Soviet Union had a demonstrable edge in space technology and it scared the hell out of basically everyone in the United States of America because they can launch a satellite, they can launch a nuke.
And there were a bunch of really big moves that the U.S. put in place after that. Number one, we created NASA. We created the agency that was the predecessor to DARPA. But my favorite response to Sputnik, and it was absolutely a response to Sputnik, was the National Defense Education Act,
where we literally massively increased the number of STEM undergraduate degrees and PhDs funded by the U.S. government, and we overhauled science and technology teaching at the high school level nationwide. And that was a national security policy. And again, that wasn't because people were confused or trying to wrap a national security label around an education policy that people already wanted. No, they were really scared about the Soviet scientific and technological edge.
And again, I think we're seeing that echo here. So you're saying we might need, as a country, as a government, to fund AI in this way.
Oh, yeah. There are big moves in industrial policy that are anticipated here. So, for example, I had the privilege of attending National Security Advisor Jake Sullivan's speech this morning. And one of the things he said that struck me as just incredibly remarkable is, you know, as we all know, modern AI systems, especially frontier AI systems, are really energy intensive.
When a company like OpenAI or Microsoft or whoever is thinking about building their next data center, they literally put them next to power stations, and they have to think about whether or not there's all the electrical connectivity infrastructure. We have a ton of data centers in Northern Virginia. Yes, yes. And what's so interesting is this memorandum directs all the relevant agencies to work together and come up with what they're going to do that's going to make it easier to designate,
develop, and build new power stations by reducing the regulatory burden, by reducing the permitting burden, and by speeding things up. Jake Sullivan said that he thought it was plausible that we were going to need to add hundreds of gigawatts to the U.S. electrical grid.
The entire U.S. electrical grid is right now 1,250 gigawatts. So he's saying, think about five years ago. AI was nothing on the electrical grid five years ago. Now Jake Sullivan is talking about a world where by the end of this decade, AI might be something like 25% of the U.S. electrical grid.
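A quick back-of-the-envelope check on those numbers (this is our own illustration, not something from the memorandum, and the 400-gigawatt figure below is just an assumed stand-in for "hundreds of gigawatts"):

    # Rough sanity check of the "something like 25%" framing.
    # Assumption (ours, not the memo's): "hundreds of gigawatts" means ~400 GW of new AI load.
    current_grid_gw = 1250        # approximate current U.S. generating capacity, as cited above
    assumed_ai_addition_gw = 400  # illustrative assumption only

    future_grid_gw = current_grid_gw + assumed_ai_addition_gw
    ai_share = assumed_ai_addition_gw / future_grid_gw
    print(f"AI share of the expanded grid: {ai_share:.0%}")  # roughly 24%

On those assumed numbers, AI would account for roughly a quarter of the expanded grid, which is where a figure like 25% comes from.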
These are very big industrial policy moves that are being considered. But as with the National Defense Education Act, these are industrial policy moves being considered with an explicitly national security justification. So it's not only a national security issue, but it's an energy security
issue. Yes, but it's not like the energy security issue that we're familiar with. It's like, are we running out of oil? Can we get oil from countries that we trust? It's an energy security issue that has much more to do with execution and implementation. Can we actually build power plants fast enough? Can we actually build voltage transformers fast enough? Can we actually hook all this stuff up?
Well, the good news is there are a lot of jobs associated with this. Construction is a lovely, lovely way to employ a lot of people very quickly. But right now, bringing a new large power plant onto the U.S. electrical grid, that's an undertaking of many, many years. And what the White House is saying with this document is –
Leadership in AI is critical to national security, and we don't have time to do things the way that we've done them before. Just connecting this to some of the moves that you've seen in the corporate sector, I'm sure you saw that Microsoft is reopening Three Mile Island, the nuclear power plant that was shut down after a safety failure in, I think it was the 1970s. I think that that is an important move in terms of the amount of electricity that that
Yeah, what they're saying is that we need a massive amount of electricity. We need clean electricity. Yes. And the cleanest is? Nuclear and renewables. Correct. And renewables are usually intermittent. So if you want something that's going to be nice baseload capacity producing power 24-7, you know, nuclear. So I think the fact that Microsoft chose Three Mile Island, I mean, Three Mile Island is not going to produce hundreds of gigawatts of electricity. So I think it was much more significant as a symbol
of just how committed they were to building all this electrical infrastructure. And, you know, I, my team, and some other folks from the DC think tank community just got back from a research trip to the UAE. Countries like the UAE are basically going to the largest AI companies around the world and saying, come here.
Because we have what you need. We have the ability to build a ton of electrical generation capacity and a ton of high quality data centers incredibly quickly. And that does really seem to be the locus of the competition in frontier AI right now. Is the United States okay with that? Having our
data companies that are producing AI products there in the UAE? Great question. It's quite controversial right now in the U.S. national security community. I think the debate sort of hinges on, well, if we're going to be leaders in AI, that needs to be because our technologies are widely adopted, because we have customers around the world that are sort of
fueling the growth of this technology. And the UAE really markets itself as a doorway to the Global South. You know, a lot of stuff that goes to India goes through the UAE. A lot of stuff that goes to Africa goes through the UAE. And that's true of both goods and data. So that's one side of the argument, the sort of pro side of being willing to build a lot in the UAE. On the flip side, there's a lot of concern about securing these types of data centers, right? We don't want to spend...
$20 billion training the latest, greatest AI model just to have China steal the model weights in a really easy cyber attack for near nothing in terms of cost. And so some folks will tell you that really the only way that you can secure it is by having it in the United States.
In terms of what this document, the AI National Security Memorandum, is, I don't know that it necessarily takes a definitive position on this issue. It kind of tries to have it both ways. The number one thing is it unambiguously says that we need to build a ton of AI infrastructure in the United States. That's why, again, in Jake Sullivan's speech, I don't know that he was really being definitive here, but he was speculating, right, that we might need to build hundreds of gigawatts and add that to the grid just for AI data centers.
That's really a "we need to build it here" kind of argument. And that's reflected in this document, that prioritization. At the same time, the document also talks about how we need to have effective partnerships in AI, how we need to have a mechanism whereby our allies and partners can see themselves in this story, that it's not just the United States hogging AI for itself. But I would say the policy guidance on that part of the story is a lot less definitive. It's a lot less clear. It talks more about goals and less about actions.
So, Greg, what are some of your other critical takeaways from this document that we haven't talked about yet? I think there are really three pillars to this document. The first is ensuring that the U.S. continues to lead the world in AI. You know, we've talked about the immigration part of that. We've talked about the energy part of that. There's also a research and development part of that. The second is talking about the national security uses of this. There's a lot of guidance to the U.S. military, to the U.S. intelligence community, basically saying,
You need to be experimenting with frontier AI models. You need to be making sure that your digital infrastructure, your cyber infrastructure is compatible and ready for the latest AI models. You know, think about how long it's taken the U.S. DOD to get into cloud.
It was like 10 years after commercial industry that we had cloud. I think what the government is saying here is we cannot have that again with frontier AI systems. And if you think back to some remarks that Sam Altman made, I want to say six months ago, he described the performance of GPT-4, which at the time was the best model that they had out there, as, quote, mildly embarrassing at best.
So he was aware of all of the limitations of that system. So you might then ask yourself, okay, this is what the guy selling this technology describes it as. Why are we so gung-ho to tell the national security community that they have to adopt it right now? And I think there's a few answers there.
Number one is that cloud experience, right, of being five, 10 years behind the commercial state of the art and sort of an assumption that that is just completely unacceptable when it comes to AI. This technology is evolving so fast. It's becoming so powerful. You know, GPT-4,
when you compare it to GPT-2, is like Einstein compared to a cockroach. So it's not necessarily that GPT-4 is right now, right this second, this national security silver bullet. It's just that the trajectory of this technology is so astonishing. And if we don't start moving aggressively now, we're going to miss out in a big way in the not too distant future.
The third pillar comes after harnessing AI for national security benefit, but before we get to it, I want to emphasize one more thing, I guess I should say, on harnessing it for national security benefit.
It talks about the counterintelligence needs of this. And Jake Sullivan explicitly said, we need more money and more people going into the business of protecting American AI systems, both in national security and also in commercial industry. One thing he did not say, but something that I have been recommending in my writings here at CSIS, is the intelligence community also needs to up its game internationally.
in terms of assessing adversary AI capabilities. You know, what is going on in China? I think about, for example, the fact that Gina Raimondo was surprised by the Huawei smartphone that came out, you know, powered by chips in violation of U.S. export controls during her August 2023 trip to China.
Why was she learning about that in China? Why did she not learn about that from the U.S. intelligence community? So that statement is not in this document. Of course, there's another classified version of this document. I have not seen it, but I hope it's talking about, you know, those types of issues. Those really matter. Now, there's a third pillar to this document, and that is governance of AI systems.
You might ask yourself, hey, this whole document is basically talking about pushing the gas pedal down to the floor in terms of accelerating AI. So why is there this other part of the story around governance, which is really associated in some people's minds with slowing things down? Right, regulation. Exactly. A lot of people see it as a synonym for regulation. So the Biden administration and Jake Sullivan in particular anticipate that objection and just completely disagree.
And it's not because they're saying that this governance framework, which is the sort of companion document that came out today, the governance framework for AI and national security, is a slowdown worth accepting just because of our need to mitigate risk. They actually say that the absence of governance has been a critical driver in not adopting this technology. And here's what they mean by that.
Consider, for example, the debate on autonomous weapons. You know, there is a policy on autonomous weapons. It was updated in January 2023 and originally came out in 2012. That...
document was so misunderstood by so many people, including in the Department of Defense. You would see senior leaders in the military or senior leaders in the civilian part of the DOD literally say incorrect things about what that policy actually said. And so one of the things they really needed to do was to make the policy way more clear.
What is allowed, what is not allowed, under what circumstances, with what risk mitigation measures taken in place. And so a lot of times in a bureaucracy, in a government agency, if you don't know whether or not something is allowed, there's a lot of incentives to be conservative and risk averse in terms of adopting that technology or taking on that new approach.
And so the Biden administration is basically saying that this governance framework that we're putting out here is actually going to accelerate AI adoption because we've removed all this uncertainty. We've removed all this ambiguity.
And we've told people there are prohibited use cases when it comes to AI in national security. Two examples, which are talked about not in the document itself but which I've heard from other government officials: one is any AI system that is going to unlawfully suppress or burden the right of free speech or the right to legal counsel.
But another thing that's prohibited is getting in the way of the United States president's command and control authority over nuclear weapons, whether to use them or to not use them or to turn them off or anything like that. So there are sort of prohibited use cases. And then there's also these high-risk use cases, which basically says this is not banned, but there's additional technical scrutiny or additional process scrutiny or additional due diligence that is required. But once you do that, it is allowed. And so they're saying, you know, these are the governance –
mechanisms that we need in place to go faster in AI. One analogy that I think is helpful here:
I talked earlier about putting the pedal to the floor on your car. How fast would you drive if you didn't have brakes? Sure, you drive really slow. You drive really slow because you're terrified of crashing all the time. And so what they're basically saying is we're putting brakes on the car. Right, because everybody's terrified about this. Exactly. And so by putting brakes on the car, we're giving folks more of the control and clarity that they need to be willing to drive faster. I see. So they see this as accelerating it.
So those are the sort of three pillars of the document and the overall approach. All right. We get that now, but let's talk about what's next. Now that it's been released, what's going to happen or what is supposed to happen?
There's a lot of thou shalts. There's a lot of, you know, this organization shall, that organization shall. So, you know, this document is about 40 pages long, and a lot of different agencies in government have got a new to-do list. One of the things the White House generally does is, you know, they're not claiming to be experts on
every nitty-gritty detail of immigration reform. What they're just saying is: this is our policy; you, DHS, and you other agencies, go figure out how to make this actually happen at the nitty-gritty level.
So you see, for example, within 180 days of this memorandum, the chair of the Council of Economic Advisors shall prepare an analysis of the AI talent market in the United States and overseas. And then, you know, once we understand the talent market, take steps to change how we do immigration policy for those folks who have AI talent, which is a different part of this. The AI Safety Institute
which exists in the Department of Commerce, was already established a while ago. This document sort of constitutes its first official charter. I mean, its roles and responsibilities are defined to a degree of precision
that was not previously the case. And so now the AI Safety Institute is on the hook, within 180 days, to do a lot of red-teaming analysis, you know, on a voluntary basis with a lot of the leading AI companies, and to have the results of that work. And they're looking at a lot of national security use cases. One of the things that you see throughout this document is the risk from AI
in the domain of biological and chemical weapons. And so what are they thinking about here? The reason why they're concerned is that right now, making a biological weapon is kind of hard. You need a lot of expertise. And AIs are proposing to be experts. Sure. And so what if, you know... Just as it has great applications for...
Medicine. Exactly. And, you know, there's a lot of gene synthesis companies out there who will just make any DNA strand you send them. So that's what they're looking at from a red-teaming perspective. There's also a red-teaming perspective on, you know, privacy or civil liberties type concerns. But all of that is now tasked out to the AI Safety Institute. There are, like, a million more taskings we could go through. But suffice it to say, a lot of people have a lot of work in Washington, D.C. now.
Greg, finally, you know, we're in election season. We're about two weeks out from a very consequential election. How might the upcoming election impact the implementation of this document? Great question, because you might have heard, for those who are listening closely, that some of these deadlines are 180 days from now. That's right. We will have a new president. Right. In 180 days. That is certain. That is certain.
So the question then becomes, is a Trump administration going to continue all of this action? Is a Harris administration going to continue all of this action? In the case of a Harris administration, Vice President Harris has sort of served as the unofficial AI czar in the Biden administration. And I haven't heard anything coming out of her campaign that suggests there's anything in this document that she would object to.
So my sort of base case hypothesis is that all of this goes forward in a Harris administration. In a Trump administration, former President Trump in the Republican Party platform explicitly said he was going to repeal the AI executive order.
Well, this memorandum was tasked out in the AI executive order. And there's been folks in Republican policy circles, including Trump himself, who have expressed a lot of skepticism over any effort related to AI governance, you know, because they view it as excessively regulatory and getting in the way of industry innovation. Even though literally, you know, a group of 60 AI companies and industry organizations just on October 21st earlier this week
sent a letter to Congress saying that they wanted action to codify the AI Safety Institute. Yeah, they want to be regulated, and Sam Altman's actually testified to that effect. Well, in the case of the AI Safety Institute, all of that is taking place on a voluntary basis. It does have some implications for what might happen in a lawsuit, and so there are regulatory implications. But what the AI Safety Institute is doing is really giving sort of an official government –
blessing is probably the wrong word, but they're just saying, like, this is the guidance for what it means to be a responsible actor in the case of AI safety. Did you do the appropriate due diligence, all that stuff? The testing at this stage is voluntary, but all the companies have said, we want this. Like, it's really helpful to us to have the government as an impartial third-party actor sort of providing these services to us. So where I'm sort of coming down on the Trump side of the equation is there's some stuff in here that
is obviously what the Trump administration is going to want, right? How do you get rid of red tape in permitting and other stuff to build a ton of power plants? A lot of Trump administration AI policy folks, I think, would be very on board with that. Immigration reform, they're probably going to be quite a bit more skeptical of that type of action. This AI governance stuff,
I think that's at risk, but I do think that it's possible that we move from, in this scenario, a Trump campaign to a Trump administration. Those folks are going to engage with industry. They're going to hear the strongest arguments, and maybe they'll have a different tune. So it's fluid? It could be fluid. What I'm saying is the future is uncertain. There hasn't been absolute clarity on who would be leading AI policy in a second Trump administration, and so it's just much more difficult to say what the future would hold.
Greg, this has really helped us understand this document and its implications a lot better. So thanks very much. Thank you.
If you enjoyed this podcast, check out our larger suite of CSIS podcasts from Into Africa, The Asia Chessboard, China Power, AIDS 2020, The Trade Guys, Smart Women, Smart Power, and more. You can listen to them all on major streaming platforms like iTunes and Spotify. Visit csis.org slash podcasts to see our full catalog.