
Chinese Perspectives on Military Uses of AI

2024/12/17

China Global

People

Bonnie Glaser
Sam Bresnick
Topics
Bonnie Glaser: In its 14th Five-Year Plan, China prioritized the development of emerging technologies such as artificial intelligence, and AI-enabled military capabilities are becoming increasingly central to China's concepts for fighting future wars.

Sam Bresnick: China sees AI as foundational to its future military modernization; intelligentized warfare will be characterized by faster operations, more precise strikes, greater autonomy, and AI-assisted decision-making. Chinese scholars also recognize AI's risks, such as problems with the explainability and trustworthiness of AI systems and the risk of mistakenly striking friendly forces or civilians, and they recommend a range of approaches to mitigating these risks. Within China there are implicit debates over whether to pursue exquisite quality or mass quantity in military applications of AI, and over whether China's military culture, bureaucracy, and political system are conducive to the best use of AI. Chinese experts lack trust in the capabilities of existing AI technology, because in military settings AI systems need a higher level of trustworthiness before they can be used for life-and-death decisions. Constraints on the development and implementation of Chinese military AI include insufficient data, cybersecurity problems, testing and evaluation, and standardization. China does read the work of other international experts, particularly American ones, and advocates international cooperation on AI safety, though with more focus on civilian domains. Chinese discussions of AI risks show that they recognize AI as a double-edged sword and are trying to manage risks with the United States, for example through the ongoing AI dialogue. AI may be changing China's approach to deterrence in the South and East China Seas, increasing the risks of rapid decision-making and escalation. The U.S. and China should hold more dialogue on military applications of AI, for example on controlling escalation and developing norms, and should explore mutual perceptions of AI risks, testing and evaluation methods, and cybersecurity. The two countries can also take unilateral actions to reduce risk, such as the U.S. declaration that it will not use AI in nuclear weapons decisions. The U.S. should continue to emphasize responsible use of AI and lead by example to promote dialogue and potential action with China.

Bonnie Glaser: China's views on military applications of AI reflect a balance between technological development and military modernization. On the one hand, China is actively developing AI technology and regards it as key to future warfare; on the other, it recognizes the risks AI brings and is seeking ways to mitigate them.

Sam Bresnick: Chinese views on military applications of AI are complex and multifaceted. They combine optimism and enthusiasm about military applications of AI with a clear-eyed awareness of the associated risks and challenges. This complexity shows up in discussions of technical approach (pursuing exquisite quality versus quantity), organizational adaptability, and international cooperation.


Key Insights

Why did Sam Bresnick write the report on Chinese perspectives on military uses of AI?

Bresnick was interested in understanding what Chinese experts themselves were saying about AI capabilities and risks, as much of the Western press coverage focused on China's centralized governance advantage without delving into Chinese perspectives.

What is China's view on the impact of AI on future military capabilities?

China sees AI as foundational for its military modernization, enabling what they call 'intelligentized warfare,' characterized by increased speed, precision, and autonomy. They believe the country that develops these systems faster will have a significant advantage in future wars.

What risks do Chinese experts associate with AI in military applications?

Chinese experts are concerned about the lack of explainability and trustworthiness of AI systems, which could lead to escalations, accidents, or mistakes. They also worry about the potential targeting of friendly forces or civilians by AI systems.

What are some areas of debate among Chinese scholars regarding AI and military applications?

One debate is whether China should focus on high-quality, 'exquisite' AI systems or rely on mass production of 'good enough' systems. Another involves whether the hierarchical Chinese military culture allows for effective use of AI, requiring potential reforms in decision-making processes.

What evidence shows China's progress in military applications of AI?

Evidence includes reports in Chinese media, procurement documents, and academic publications discussing specific AI algorithms and military problems. The Department of Defense also highlights Chinese advancements in AI in its annual reports on China's military.

Why is there a lack of trust among Chinese experts in existing AI capabilities?

Chinese experts struggle with the 'black box' nature of AI, where decisions are not easily explainable. Additionally, the hierarchical and bureaucratic nature of the Chinese military, where trust is low between superiors and subordinates, further complicates the adoption of AI systems.

What constraints do Chinese experts identify in the development and implementation of military AI?

Constraints include data issues, such as a lack of militarily relevant data due to the absence of recent conflicts, difficulties in sharing data across military branches, and challenges in digitizing data. Other issues include cybersecurity, testing and evaluation, and the need for standardized AI development.

How do Chinese experts propose mitigating AI risks?

Chinese experts suggest using synthetic data, combining human knowledge with AI systems, and improving model training to enhance explainability and trustworthiness. They also advocate for international engagement on AI safety, particularly with the United States.

What implications can be drawn from discussions on AI risks within the Chinese system?

The discussions suggest that AI could change China's approach to deterrence in the South and East China Seas, as the speed and autonomy of AI systems may increase the risk of escalation. There is also a recognition of the need for international dialogue to manage AI-related risks.

What areas should the U.S. and China discuss regarding the military use of AI?

Potential areas for discussion include escalation dynamics, norms for testing and evaluating AI-enabled military systems, and the role of AI in cyber defense or space operations. However, some of these topics, like cyber and space, may be too sensitive for immediate engagement.

Transcript


I'm Bonnie Glaser, Managing Director of the Indo-Pacific Program at the German Marshall Fund of the United States. Welcome to the China Global Podcast. In China's 14th five-year plan that spans from 2021 to 2025, priority was assigned to development of emerging technologies that could be both disruptive and foundational for the future.

China is now a global leader in AI technology and is poised to overtake the West and become the world leader in AI in the years ahead, or at least they hope so. Importantly, there's growing evidence that AI-enabled military capabilities are becoming increasingly central to Chinese military concepts for fighting future wars. A recently released report provides insights on Chinese perspectives on military uses of AI.

Published by Georgetown's Center for Security and Emerging Technology, known as CSET, the report illustrates some of the key challenges Chinese defense experts have identified in developing and fielding AI-related technologies and capabilities.

I'm delighted to host the author of the report today, Sam Bresnick, a research fellow at Georgetown's CSET who focuses on AI applications and Chinese technology policy. Welcome to the China Global Podcast, Sam.

Thanks, Bonnie, for having me. Thrilled to be here.

So first, can you explain why you wrote this report? There are obviously many things that have been written about Chinese policy on AI, but you focused on what the Chinese themselves have written. So what got you interested in that?

Yeah. So a few years ago, there was a flurry of articles in the Western press claiming that China was overtaking the United States in developing and fielding AI and related emerging technologies for military uses. Essentially, these articles all made the same claim: that China's centralized governance structure and political system gave it an advantage in identifying technologies and then making the government apparatus and even private-sector enterprises develop and use those technologies.

I thought this was an interesting insight, or series of insights, but I found that it generally ignored what the Chinese themselves were saying, both about how Chinese capabilities matched up with those of the United States and about how Chinese experts viewed China's own capabilities. So what I decided to do was use CNKI, which is a repository of Chinese academic writing, and select dozens of articles published between 2020 and 2022 using a paired-keyword search methodology that helped me identify articles dealing with the future of warfare and AI. After reading those articles, I thought I could write the report I ended up writing, identifying both where China views itself in terms of developing and fielding militarily relevant AI systems and how those systems compare with those of the United States.

So what is China's assessment of the impact of AI and related emerging technologies on future military capabilities? What are the risks that they see? And how worried are they about those risks?

Yeah, so on the first part of that question, a lot has been written about it. It essentially boils down to this: China views AI as a foundational aspect of its military modernization going forward. China has this idea that it is now capable of fighting what it calls informatized, or informationized, wars, meaning it depends on information technology to make decisions, inform targeting, and run its military operations. The next step is what they call intelligentization, which is the use of AI and related emerging technologies.

And so they believe that intelligentized warfare will be defined by increasing speed of operations: the tempo of strikes will increase, strikes will become more precise, there will be more autonomy on future battlefields, and AI systems will help inform decision-making.

And they think that the country that develops AI-enabled military systems faster and better than other countries will have a significant advantage in future wars. And because the Chinese government has identified this as an important step forward, you have a huge amount of investment in AI and related emerging technologies, aimed at overtaking the United States military in warfighting capacity.

Now, this approach is not without risks. There is a lot of discussion in the West about how China might have different views of AI-related risks, and some people hold the view that China doesn't actually care about risks as much as we do. The United States Department of Defense has released several documents outlining risks and how to mitigate them. What I found in this research is that a lot of Chinese scholars are actually very cognizant of AI-related risks and recommend several approaches to mitigating them. One of the big risks they mention is the lack of explainability and trustworthiness of AI systems: you don't know how these systems make decisions, and you don't know how much you can trust them. So using them could lead to escalations, accidents, and mistakes that could spark or escalate wars. A couple of other risks are AI systems targeting friendly forces, and even targeting civilians.

In this report, I tried to outline as much as possible that within Chinese military and military-academic circles, these discussions of risks are occurring.

In your research, did you detect significant differences among Chinese experts in their views, either on risks or on other aspects of AI and its military applications? And did you identify any areas of debate?

Yeah, so that's a really interesting question. In general, I would say the vast majority, almost 100 percent, of the articles I read were very optimistic about AI being really important for future military operations.

There are two interesting debates. I don't know if I would call them debates, but perhaps differences you see when reading between the lines. One is whether China should go for the exquisite or go for mass. What I mean by that is this: in the United States, our military has prioritized exquisite systems, the best of the best, very expensive platforms, with F-35s and aircraft carriers as examples, where you think you have a quality advantage. You spend so much money, time, and resources developing these things because you think they will give you big advantages on future battlefields. China, by contrast, has gotten a lot better at quality, but where it really excels is quantity. It now has more missiles, and more of many military platforms, than we have.

And so there's a bit of an implicit debate going on in China with regard to AI and emerging technologies: should they really aim to push quality forward, or get to good-enough quality and make up the difference in quality with mass, with more of them?

So that's one example, and drones illustrate it well: China has a big advantage in manufacturing capacity, so there's a thought among some scholars in China that China doesn't necessarily need the best drone software or AI, but can make up for that with overwhelming numbers.

The other interesting debate, again reading between the lines, is over Chinese military culture, bureaucracy, and political culture, and essentially whether the Chinese military system allows for the best use of AI. What I mean by that is that China has a very hierarchical, very bureaucratic military system. It doesn't often allow younger, more junior officers to make decisions. And so there are some people writing in China about the need to reform decision-making cycles or processes in order to take full advantage of AI and related emerging technologies.

So those are two debates. I wouldn't necessarily call them explicit; they're taking place implicitly, between the lines, but I think they're important for thinking about how China approaches AI and related emerging technologies going forward.

What evidence in open sources shows that China has been making progress in the military application of AI?

So there are a number of data sources here. A big one is simply Chinese media: PLA Daily and the South China Morning Post, for example, will often write about quote-unquote breakthroughs in military technologies. Swarming technologies are a big one, as are decision-making systems.

And then the Department of Defense releases a report on China's military advancements every year, and for the last few years those reports have offered more and more evidence of Chinese advances in AI and related emerging technologies. You also have one-off media reports about various things. For example, in the last couple of weeks there was a report about China's use of, I believe, Meta's open-source LLM for military operations or military planning in some way. So there are media reports. From where I sit, there's also a good amount of information in procurement documents. CSET has done some work on this, using open-source procurement documents to look at what the PLA is buying in terms of militarily relevant AI. And then in academic journals, computer science, engineering, and military journals, you'll find articles discussing specific algorithms aimed at solving a particular military problem, or a certain computer science technique for training drone swarms, for example. So across media, academic publications, and open-source documents and venues, there's mounting evidence of the importance of AI and of China's investment in it. The last thing I'll say is this:

We also know from Chinese strategic documents about the importance of military-civil fusion, and this is something that continues apace. Essentially, that's the idea of getting the public sector and the private sector working together on AI-related technologies to push forward military modernization. So I would say there's a lot of evidence; you just need to know where to look for it.

Your report says that there's a notable lack of trust among Chinese experts in the existing capabilities of AI technology. I found that really interesting. Can you elaborate on it?

Yeah. The interesting thing about AI right now is that everyone is talking about how important it is, but we're often using it in very low-stakes scenarios: asking it to summarize an article, or to generate some talking points. You have to remember that in a military context, these are often life-and-death decisions.

And so there's a higher level of trust you need before you bring these systems online. While doing this research, I found a lot of Chinese military authors writing about the difficulties of knowing how AI systems make decisions. Some people call this the black-box problem: you give an instruction and there's an output, but you don't know how the system got from A to B.

And this is super important, because you want to be able to understand how an AI system makes a decision in order to trust that system. Throughout the documents I read, there were dozens of mentions of lack of explainability, lack of reliability, and lack of security, all of which impact the trustworthiness of these systems.

And again, to harken back to an answer I gave previously, the PLA in general is a low-trust organization. As we've seen in the last couple of weeks, very high-ranking officials can be purged at any time. You don't have superiors trusting their junior officers, and both of those groups might not trust the systems themselves. So going forward, a big potential stumbling block for the PLA is whether it can design systems it trusts enough, and also at what point you trust a system enough to use it.

Chinese experts have also identified a range of deficiencies that might constrain the development and the implementation of military AI. Can you list a few of those?

Yeah. They basically range from data issues to cybersecurity to testing and evaluation to standards. The full list is in the report, but I'll go through a couple here.

To train artificial intelligence systems, you need a lot of data, and you need high-quality data. This is an issue in China, even though we hear a lot that China has a data advantage because it has so many people and so many transactions captured digitally. In the military sector, China actually has a lack-of-data problem, because it hasn't fought a war in 45 years. So compared to the US, where we have millions of hours of drone footage from the Middle East, China has less militarily relevant data than we do, which makes it difficult to train models; they have a smaller selection of data to draw on.

Beyond that, there's an issue with sharing that data among the PLA's different arms and services. There appears to be some stovepiping going on: what the PLA Navy has, it might not share with the PLA Air Force. There are also issues with digitizing data. In a really interesting article that appears in my report, one of the authors notes that some naval gun-firing data is written down on paper and seldom digitized. So even though we think of China as a fully digitized society, the PLA still appears to be having some trouble fully digitizing its data resources. And then, finally on data, there's the issue of analyzing the huge amount of information you collect.

Modern warfare generates a huge amount of data from sensors, you know, in the sea, on the land, in air, in space. And so you need to get all of this data together and analyze it quickly. And there's some concern about that.

Another thing I'll mention briefly is standards. CSET research has found that China's defense industrial base has become more diffuse over the years: Beijing used to rely on a handful of SOEs to do most of its military supplying, and now there are thousands of companies involved. Ensuring that there are standards for AI and related technology development seems to be a challenge, because without shared standards you're going to have systems developed by different companies that might not talk to each other, or work together in a way that creates inter-system interoperability, which is important going forward. The last thing I'll mention is testing and evaluation.

AI systems are a relatively nascent technology. I would say we're probably having issues in the United States military figuring out how to properly test and evaluate some of these systems, and I think China is in the same boat, at least according to the articles I read. How do you create testing and evaluation procedures that account for all the possibilities of future wars, where you have the fog of battle, and where electronic warfare means you can't properly see or understand what's going on? How do you mimic those situations in peacetime to train and evaluate your models or systems in a way that guarantees they will work the way you want in the future? These are all challenges, and I don't think anyone has great ideas right now about how to fully solve them.

When the Chinese think about how to mitigate AI risks, are they reading the writings of other international experts? Obviously, there's a lot about mitigation of AI risks in literature here in the United States, and I assume elsewhere.

And do they advocate international engagement on AI safety? I know there's an existing dialogue between American and Chinese experts on AI, and it's been feeding into some of the policy discussions in Washington, D.C. And of course, we recently saw the signing of an agreement, or an understanding, between President Biden and Xi Jinping on keeping humans in the loop on any decision-making about nuclear weapons and not leaving that to AI. That was something the two governments actually worked on for several years through track-two discussions before taking it up in official settings.

So how do the Chinese think about mitigating AI risks? And do they even advocate forgoing or curbing some of the potential uses, as I think some Americans do?

Yeah, so there's a lot there. I'll take a few stabs at that. On the first question, are they reading international experts? Absolutely. I would say American experts are really, I think, at least in the articles I read, the most cited.

There are various people, I won't name names, but there are some people who show up all the time in these documents. And you wonder sometimes, are these documents revealing Chinese perspectives or are they just sort of regurgitating US perspectives and presenting them as their own? Because a lot of times there's a lot of overlap.

Sometimes there's not. But that's number one. In terms of risk mitigation, there are technical techniques they put forward. There's a focus on coming up with ways to expand datasets, to use synthetic data, and to combine human knowledge with computer systems, basically to create more data that would allow their systems to work better. There are also model-training recommendations for getting around some of the explainability issues, to make these systems more explainable, more reliable, and more trustworthy, so that you can avoid some of the unreliability that could lead to escalations and risks. Interestingly, there's also a lot of talk about, as you mentioned, the need to engage internationally.

Something I found very interesting is that the China-US AI dialogue has continued over the last year or more, at the same time as the U.S.-China dialogue on nuclear issues has been shut down by the Chinese. I think this is evidence, on some level, that China is really taking these issues seriously and is concerned enough about them to want to understand where the U.S. is coming from and to try to mitigate risks like this. Now, in general, I would say China is very interested in international engagement on AI safety, with a focus on civilian systems. It has been a little less forthcoming in explicitly military forums. It does go to the REAIM summit; it's one of the big countries there. It's involved in the UN. But China hasn't really released that much

guidance for military AI that it's pushing into the international conversation. And something I found interesting while doing this research was that there seems to be a big appetite within China's military-AI defense expert community, if you will, for engaging on these issues to share the Chinese perspective, because there's a sense that the U.S. is really taking the lead here, with the political declaration on military AI, for example. So there's a desire to create a channel for China to both engage and share its best practices and views on the matter.

What implications can be drawn from discussions on AI risks that are taking place within the Chinese system?

Yeah, so that's a really interesting question. There's a general take that

AI, as they say all the time, is a double-edged sword: it creates great opportunities, but it also carries great risks. So, as I mentioned, one of the implications, I think, is that it's super interesting that they have continued this AI dialogue with the United States despite shutting down the one on nuclear issues. It means they're trying, to some extent, to manage risks with the US.

Another really interesting thing, which deserves more discussion and which I've written about a little, is that there are voices in China who believe that AI is going to change, is already changing, or should change the Chinese approach to deterrence in and around the South and East China Seas. This is because the PLA has for a long time had this idea that it's very good at managing escalation and putting a lid on escalation dynamics. The introduction of AI, related emerging technologies, and autonomy could increase the speed at which decisions get made and escalations happen. So there's a sense, perhaps, that this will make it riskier for China to continue the risky deterrence actions it takes in and around the South and East China Seas.

So one of the implications, and one of the things I think the US should discuss with China going forward, is how AI changes or impacts escalation dynamics. And is there a way, maybe beyond hotlines, because we've seen those don't always work so well, to discuss norms and rules of the road around that issue?

I mentioned earlier the Xi Jinping-Biden agreement on maintaining human control over the decision to use nuclear weapons. And you just brought up, I think, a good suggestion that discussions take place on controlling escalation. So those are two good areas. Are there other areas in which you think the military use of AI should be discussed between the United States and China? Is there anything else that lends itself to an agreement, and something that could be tested? Clearly, in the case of keeping humans in the loop on decision-making over nuclear weapons, it's going to look good on paper, but I doubt we're going to have any on-site verification, so we probably won't really know whether they're complying with it. And hopefully we will never test that. But are there any other issues that come to mind?

Yeah. Just briefly on the nuclear side of things: interestingly, pretty much all of the papers I read noted that combining AI with nuclear command, control, and communications was a bad combination. I think that's a little encouraging; within the Chinese defense sector there's a view that AI and nukes should not be mixed. I agree with you that it's going to be very hard to verify, but the preponderance of evidence, at least that I've seen, suggests they don't want to do that.

In terms of other areas to explore, one is the escalation dynamics topic I just mentioned. I also think it would be useful to talk about both sides' perceptions of AI risks. We have a sense of what their ideas are regarding AI risks, but as this technology develops and both sides roll out more AI-enabled military systems, are they seeing new risks pop up? Where do we see risks? Can we work together on managing those risks? I don't think we're going to get to a point in the near or medium term where we're signing binding arms-control agreements, but I think there's space for the establishment of norms, and understanding how to mitigate risks could be fertile ground for discussion.

Another area would be norms around testing and evaluating AI-enabled military systems. As I mentioned before, this is a really new area that both sides, I think, are still learning about, so sharing best practices on how to test and evaluate these systems could be useful. And then there are areas that are a bit more sensitive, where I'm less hopeful China would be willing to engage, and I'm not sure how willing the United States would be either.

There's been a lot of discussion recently about Chinese hacking efforts and penetration of critical infrastructure, and the role AI might play in cyber defense or cyber offense in those contexts could be an interesting thing to discuss, as could how AI might impact space operations. Both are very sensitive on the Chinese side. I don't really see a huge amount of runway for those, but I think it would be useful to inquire whether they could be discussed at the track-one level, because I do know they are being discussed at the track-1.5 and track-two levels.

Great suggestions. Are there any steps that could be taken unilaterally? I agree with you that the Chinese are essentially allergic to signing agreements, or anything that smells of arms control, and they don't want to be treated like the former Soviet Union. Maybe we will eventually get there, but I'm not terribly optimistic about the short term. Are there any steps that could be taken unilaterally by both sides that could actually reduce risks?

Yeah, that's a really interesting question. I think back to the nuclear angle, where the US actually came out, a year or two ago, and said that we will not include AI in nuclear decision-making. From what I heard from Chinese sources, there were basically two reactions to that. One was: great, the United States has done this and agrees with this, and we should consider doing the same. The other was a total lack of trust, an idea that we would say we're doing that but not actually do it, and that China would be at a disadvantage if it made the same pledge. So I don't have any great suggestions on unilateral actions. I do hope, going forward, that the United States continues to engage Chinese counterparts in these discussions to see what's possible in the very tense security environment we're in right now and will be in going forward. But I think the US focus on responsible use and trustworthiness in AI systems is a good example to set, and hopefully it will convince at least some in Beijing that we are not necessarily going to push the envelope toward complete military advantage at the expense of AI safety. If we continue to reinforce that, there could be some areas for dialogue and potential action going forward, again, short of binding arms-control agreements, as I agree with you.

We've been talking with Sam Bresnick, who is at Georgetown's Center for Security and Emerging Technology and has written a great report on Chinese perspectives on military uses of AI. I urge all of our listeners to go to Georgetown CSET's website and read the report. Thanks so much for joining the China Global Podcast, Sam.

Thanks so much, Bonnie.