
Will AI be the next arms race?

2024/9/30

Chinese Whispers

People:
Cindy Yu
Matt Sheehan

Topics:
Cindy Yu: This episode explores China's progress in artificial intelligence and the Chinese government's regulation of generative AI. Although ChatGPT is banned in China, the homegrown AI industry is developing rapidly, led by tech giants such as Baidu and ByteDance. China's AI industry covers not only generative AI but also facial recognition, robotics, autonomous driving and other fields. The Chinese government both supports the industry's development and regulates it strictly, to ensure AI technology is not used in ways that endanger national security and social stability. Matt Sheehan: The CCP divides technology into productive technologies and frivolous technologies; the former add to national power, the latter do not. Generative AI sits in an awkward position: it could enhance national power, but it could also produce politically unacceptable content. The government's regulation of generative AI aims to steer the industry's direction so China does not fall behind the West, while preventing the technology from being misused. Chinese AI regulation distinguishes between public-facing and non-public-facing applications, imposing stricter requirements on the former to ensure they do not produce politically unacceptable content. China's AI industry also faces challenges, such as funding shortages and chip restrictions. Matt Sheehan: Government support for the AI industry takes many forms, including direct subsidies, free office space and tax breaks. The so-called "national AI team" is more a propaganda strategy than a strict assignment of tasks. The government's AI regulation has shifted from an initially strict approach to a relatively relaxed one, reflecting the tension between controlling content and keeping the industry at the frontier. The Chinese government also actively studies other countries' experience of AI regulation, including that of the US, and applies it to its own policymaking.




Before we get into today's episode, I wanted to tell you about the live Chinese Whispers podcast I will be hosting at London's Battle of Ideas Festival. On the 19th of October, myself and a panel of special guests will be talking about the latest on China's economic slowdown, and we'll be asking: what are the social and political implications of it? Is China in decline?

Chinese Whispers listeners can get a 20% discount on the ticket price with the code WHISPERS24. Click the link in the description to find out more and get your ticket. Hello and welcome to Chinese Whispers with me, Cindy Yu. Every episode I'll be talking to journalists, experts and long-time China watchers about the latest in Chinese politics, society and more. There'll be a smattering of history to catch you up on the background knowledge and some context as well. How do the Chinese see these issues?

The release of ChatGPT in late 2022 brought home the sheer potential of artificial intelligence and the speed with which developments are being made. It made AI the hot topic, from business to politics and yes, journalism. This was true in China too. When I visited earlier this year, AI was discussed almost ubiquitously. This has happened despite the fact that ChatGPT has never been allowed to be used within Chinese borders.

Instead, China has a rich landscape of homegrown AI products, where progress is being led by tech giants like search engine Baidu and TikTok's owner ByteDance. So, already we are seeing a bifurcation in the AI worlds of China and the West, just like other digital spheres of social media and e-commerce.

This episode will peek over the Great Firewall once again, to update listeners this time on China's progress on AI. The country is fast becoming a superpower in the technology, even as it limits the freedoms its generative models can have and keeps out some of the world's leading companies. Could this be the next arms race? I'm joined today by the researcher Matt Sheehan, fellow at the Carnegie Endowment for International Peace and a longtime watcher of China's tech scene. Matt, welcome to Chinese Whispers. Thanks very much for having me.

Now, I think when it comes to AI, most non-experts like myself first think of generative AI. So the large language models like ChatGPT, which have made such amazing progress in the last two years. I now use ChatGPT pretty regularly in my personal and professional life. But the app isn't available in China. And I want to start there because I think that underlines the slightly tricky relationship the Chinese state has with this particular form of AI, generative AI. So can we start with that dynamic?

The crux of which I think is because chatbots are hard to censor.

Yeah, I think at a high level, the CCP tends to divide technology into productive technologies and frivolous technologies, where productive technologies are things that are adding to national power. You know, they are helping the CCP achieve its economic goals, its political goals, its goals on the international stage, just in terms of like aggregate national power. And then frivolous technologies are...

Pretty much everything else, usually in the realm of, you know, online entertainment, even things that we tend to think of as being relatively economically productive, like, you know, ride hailing apps. I think there's just a kind of a hardwired ideological stance that, you know, real power and real strength comes from industry and military and stuff like that.

So, you know, I think this distinction predates AI in the sense that the CCP was always very willing to crack down on its technology platforms that were primarily about online content. You know, they didn't have any problem cracking down on Weibo back in the day. During the tech crackdown of sort of 2020 to 2022, they didn't have any problem cracking down on Alibaba and Tencent and these big platforms that they don't see as contributing to sort of hard national power.

And I think that points at a really tricky spot for the party as it relates to AI. They've long said that they view AI as extremely important technology. They did their national AI plan back in 2017, saying they want to be the world's leading AI power by 2030 because they see it as really productive and helping in a bunch of ways, including surveillance of the population through facial recognition and whatnot.

And generative AI sits at this uncomfortable space where it is primarily about creating content. It's about creating words, it's about creating images, even videos, audio.

But there's also a chance that it ends up being an extremely productive technology. It might be that this is the frontier that maybe is pushing towards artificial general intelligence, which would be an extremely productive technology. And so it pulls them in these two different directions. We're very worried about the content that it produces from a political perspective, from a social stability perspective.

And we also want to foster this industry because we don't want to fall behind. This might be a sort of building block of national power going forward. And that's where you see some of their conflicting attitudes or sometimes going back and forth on policy when it comes to generative AI.

So they would, in an ideal world, maybe prefer to develop it in a vat and then, once it's safe, release it to the populace. But that's literally impossible with generative AI. It is impossible, although they have sort of taken some steps in that direction, in that their generative AI regulation, which

regulates the outputs of generative AI. In it, they made a distinction between public-facing generative AI applications and non-public-facing applications. They basically said, you know, if you do a public-facing application, you have to adhere to all of these requirements, which, they've watered them down somewhat, but they're pretty onerous in terms of proving that your chatbot is not going to produce politically sensitive content.

But that only applies if you're creating a chatbot that is facing the public in one way or another. If you want to create one just for R&D purposes, if you're doing a purely industrial application of it, you are essentially exempt from these requirements. And they've created something of a gray area in between these two things if you're creating a sort of business-to-business application, an enterprise application.

It's not entirely clear where the line is drawn between public-facing and non-public-facing, but in creating that distinction, you can very clearly see the ideology and the goals there. They'd like to direct the industry's energy away from just kind of fun chatbots, content generation, things that people play with, and towards economically productive applications of the technology, as they define economically productive.

Yeah, everything has to have positive energy and full purpose. So what does that mean for your average Chinese person? You know, if it's not ChatGPT that you're using, are they playing with generative AI things? Are there large language models that they are using? Yeah, absolutely. There was in some ways a lag period. So ChatGPT comes out at the end of 2022. And for the first, you know, six months or so of 2023, it's

Globally, you were seeing lots of similar applications pop up. But in China, essentially, the government told people, you know, hang on a minute, like Baidu was ready very quickly to introduce its own chatbot. And the government said, you know, you can release this to a small group, but wait until we pass this regulation.

They passed the regulation in the middle of 2023, and they start to implement it in the second half of 2023. And then you started to see a number of Chinese chatbots come online. So Baidu has one called Ernie Bot. Alibaba has one called Tongyi Qianwen, or Qwen.

And you have a lot of startups in the space. Zhipu AI is one of the leading generative AI or large model startups; Moonshot AI, others. So Chinese consumers, Chinese users, have an array of choices when it comes to chatbots, and some of them perform extremely well, not that far behind some of the leading US chatbots in terms of how they score on various benchmarks.

But all of these products have gone through a very intensive vetting process that is they try to say it's not a licensing process, but in some ways it has kind of become a de facto licensing process in which they have to prove to the regulators that this chatbot is not going to produce politically unacceptable content or content that's unacceptable across a few other dimensions, including sort of

Yeah, gender bias, racial bias, a few other things. But clearly the core concern, the real hard line concern is around political content or content related to social stability. I was reading a recent report from the Wall Street Journal, which said that companies working in this area have to submit a data set of 5,000 to 10,000 questions that the model would decline to answer.

half of which relate to political ideology and criticism of the CCP. And a user who asks improper questions five times a day will have their service halted. Obviously, those aren't limitations that ChatGPT has. Although sometimes it does refuse to answer my questions. Yeah, I mean, it's interesting in that there are huge differences between the United States, or, say, the West, and China on these fronts.

But companies in both ecosystems do have to be very concerned about what their chatbots say. The difference is that in the US and in the West more broadly, they're worried about the public reaction to things. They have to be concerned if their chatbots are either going to say something that's too offensive or that's too quote unquote woke. Google got in big trouble for one of their chatbots allegedly being too woke across various lines. They had to sort of make a lot of adjustments on that front.

But the difference is that it's the companies adjusting essentially to market demands and to their broader kind of reputational demands. Whereas in China, what that Wall Street Journal article was alluding to is this quite detailed Chinese standard about how do you test your chatbot before you sort of submit your registration papers with the government. And it says, you know, you have to ask it.

2,000 questions about political topics and 2,000 questions about sort of different things related to bias, 2,000 questions about sort of unfair business practices. And the model has to perform extremely well. And it's extremely nuanced as well, like with that refuse to answer database. They specifically say that chatbot has to, you have to ask it, I think it's 5,000, maybe 10,000 questions.

I forget the exact number, but a large number of sensitive questions. And it has to, it can only refuse to answer a small portion of them because the CCP doesn't want their chatbots just to look dumb. You know, the easiest way for the companies to comply with this would be like, if you say anything about politics or international relations or Xi Jinping or whatever, you just refuse to answer. That would be the easiest way to comply. Right.

But the party doesn't want to make it so obvious that these things are heavily restricted. So they want to force the models to answer difficult questions, but answer them correctly as they see it. Gosh, that's a tricky job. Exactly. It's very tricky and it's a significant burden for the companies. It has dissuaded a lot of companies from creating public facing chatbots. And in some ways it's doing what the

regulators kind of hoped for, saying, "You want to do generative AI? Why don't you do something that's more enterprise-facing, or more about industrial applications?" There's still obviously a huge market for different versions of chatbots. And so, for companies who see this as their core competency, there's still a lot of companies that are willing to jump through all these hoops for the purposes of getting a large language model out to Chinese audiences.
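The tension Matt describes, that a model must handle sensitive test questions but cannot pass simply by refusing everything, amounts to a threshold check on its refusal rate. The sketch below is purely illustrative: the function, the 5% ceiling and the sample data are hypothetical stand-ins, not the actual figures, question banks or tooling specified in the Chinese standard.

```python
# Illustrative sketch only: a toy version of the kind of pre-registration
# test described above. The 5% refusal ceiling is a hypothetical number.

def evaluate_refusals(answers, is_refusal, max_refusal_rate=0.05):
    """Return (refusal_rate, passed) for a batch of model answers.

    answers: list of model responses to sensitive test questions.
    is_refusal: predicate marking a response as a refusal to answer.
    max_refusal_rate: cap on how often the model may decline, so that
        compliance cannot be achieved by refusing every hard question.
    """
    refusals = sum(1 for a in answers if is_refusal(a))
    rate = refusals / len(answers)
    return rate, rate <= max_refusal_rate

# Hypothetical usage: a model that refuses 10% of questions fails a 5% cap.
sample = ["I can't answer that."] * 10 + ["Here is a neutral answer."] * 90
rate, passed = evaluate_refusals(sample, lambda a: a.startswith("I can't"))
```

The point of the cap is exactly the one made in the conversation: blanket refusal would be the cheapest way to avoid politically unacceptable output, so the test design has to penalize it.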

Do you think that this stunts the development of generative AI in China, having these stipulations? Because I've also read from your own work that it's also the truthfulness of the inputs and outputs. And we know that AI can so-called hallucinate. It doesn't have to always be truthful in terms of what it puts out. And the inputs, you know, certainly in the West, you can just scrape so much data from the internet. You don't have to verify it. If that's also a stipulation, you know, how much do these Chinese regulations actually stunt the growth of generative AI in China? Yeah.

So I think they are having a sort of a dampening effect, I guess would be the way to put it. It makes it

more difficult or more risky to invest in one of these companies. And if you're one of these companies, you have to just invest a lot of time and energy in passing through these regulatory hurdles. But it's not a hard limit. If companies are willing to invest the time and energy in this, they probably can eventually get approval and then they probably can develop their chatbot in pretty meaningful ways. And they create very high performing large models. I think this is

In some ways, it could have been a lot worse. Some of what you were just referring to there about the sort of the truthfulness of the inputs and the truthfulness of the outputs. When they created their generative AI regulation, they had a draft version, and then they went through some period of sort of public debate about it, and they had a final version. And the draft version was extremely, really unrealistically demanding when it comes to this. They said everything that you train the model on has to be quote-unquote true and accurate, and everything that comes out of the model has to be true and accurate.

And they realized that that's an impossible standard and it would harm the industry in a way that the party doesn't want to for economic reasons, for reasons of national power. So between their draft version of the regulation and their final, they tamped it down. They didn't say everything must be true and accurate. They said you must take steps to effectively increase the truth and accuracy as opposed to taking the sort of absolute hard line. And it reflects the

Yeah, that tension that the CCP feels. They have an urge for control when it comes to content.

but they also have an urge to be at the forefront of this industry. And they're looking at their economy and they're feeling the need to sort of revitalize business confidence in a lot of ways and to sort of show that they're coming out of that era of the tech crackdown and they're re-embracing technology and companies and startups. So there's a lot of conflicting sort of impulses here. And it's kind of shaken out to an environment where

The regulations are burdensome, but they're not totally stifling. And I think it's been now over a year since the regulation debuted. And I think the CCP is actually probably pretty happy with how things are going. So let's talk about the AI industry in China as a whole then. Aside from the generative, what is the landscape of AI in China? Because it's not something that is talked about very much in the West.

Yeah, it's a situation where, in many ways, generative AI and the large model companies have kind of sucked the oxygen out of the room in terms of discussion of many other things. But there are many other aspects of AI that are continuing along. In many ways, if you look back over the past eight years or so, to 2016, 2017, when the national AI plan came out, there was this period where you had really explosive growth in the valuations of companies.

The sort of the initial ones were around facial recognition with an obvious market for those. The market is the government. The market is the expanding surveillance state. So there's a period of time from, say, 2017 to 2019, 2020, when some of those were extremely highly valued startups. There's sort of, I think, four leading facial recognition startups or computer vision startups more broadly. They were the sort of national champions when it came to AI in China, right?

And then you've had rise of other applications. So you certainly have a lot of work going into robotics because they're very invested in automation of the factories. You have a lot of work going into autonomous vehicles. This is one area where you actually do see very tangible sort of impacts. It's taken a long time, but at this point, Baidu is operating robo-taxi services in a number of cities, essentially, I believe, at the level of full self-driving vehicles.

Without even a driver in the car, I believe. It's something that we see here in the US as well, but in pretty small, limited numbers and a limited number of cities. And companies like Baidu and some of the other self-driving startups are pushing ahead pretty aggressively. And I think governments have been

willing to be sort of experimental and accommodating for the purposes of stimulating these industries. So you have a lot of stuff going on in these other areas. Generative AI certainly is getting the most attention and probably at this point getting the most investment, but that could all change.

You mentioned the national AI plan. I wanted to hear from you how big, or how central, they see AI as being within that productive technology category you mentioned. Generative AI aside, is it being encompassed as part of an industrial strategy, almost like renewable technology has been? Yeah, the 2017 national AI plan was a very big deal. And it

essentially generated a ton of activity throughout the Chinese bureaucracy and throughout the Chinese economy. When they pick these industries, I think there's sometimes a

mistaken impression that, you know, they create a national AI plan, they say they want to be the world AI leader by 2030, and so they have a very precise roadmap, almost like a blueprint for how they are going to build the technology itself. And it very much doesn't operate that way. I think when the Chinese government's

industrial policy is operating at its sort of highest level, it's more like just a catalyst for the industry. Essentially, the top leaders signal to the entire bureaucracy, we really care about AI. We want you, you know, bureaucrat. You could be a traffic official. You could be a mayor. You could be a university president. We want you to

do something with AI. And if you do, you'll probably be rewarded on your next sort of evaluation. It's all run through a central, you know, HR department in the CCP. And so when they put out a document like that, they essentially signal to all of these local officials, you know, do something with AI. So you're a university president, you create an

AI institute at your university, or you hire some foreign professors to come and deliver courses. If you're a traffic official, you try to apply AI to your traffic system. Maybe you buy an Alibaba City Brain, which is their kind of integrated data and AI system for managing public services. If you're a police official, a public security official, you buy a bunch of

facial recognition cameras. And that really was quite effective in just, it's almost just like pouring gasoline on the fire of what was already existing of an AI industry in China, really sort of accelerated the development. You're creating like a domestic demand. You're creating domestic demand. You're sending the signal to your officials, but then also to the private industry. Then companies, startups know, you know, this is a space where I can

probably get government subsidies. I can probably get free office space in some government tech accelerator somewhere. And so it really sort of drummed up support. I think then we had this period of the national tech crackdown on Alibaba, on a lot of the big platforms that I think somewhat dampened the enthusiasm around the industry. And this coincided also with sort of the period of COVID and the most intense lockdowns in 2022.

But coming out of that, at the end of 2022, they're coming out of zero-COVID, ChatGPT debuts, and the economy is kind of going downhill, to be honest. And so you have this trifecta of concerns that leads them to

re-embrace the technology, re-embrace the industry in a very meaningful way. So prior to ChatGPT, in many ways, the Chinese government had thought, maybe we've caught up with the US in AI. Maybe we're even ahead of the US in certain parts of AI. And the debut of ChatGPT really sort of shook that perception and got them to think, okay, maybe we can't just

crack down on our companies however we want to. Maybe we do actually need to kind of be more accommodating, re-embrace these, re-stimulate our AI industry for the sake of the economy, for the sake of national power, for all these reasons.

Yeah, I was struck, when I've been back to China in the last year or so, by how prevalent discussion of AI was among the public in general. You would see facial recognition everywhere, much more than I saw before COVID. And part of that is a COVID legacy. But also, I think it's partly just that people can use facial recognition now, you know, even scanning in to go into your residential compound. Instead of a key, you have a facial recognition camera.

Totally unnecessary in my British, kind of antiquated, view, but you know, that's very prevalent. And I had people talk about their privacy concerns, you know, just using facial recognition in the supermarket. And I even went to a university lecture on AI, because people were trying to figure out what ChatGPT was, even though no one could really legitimately use ChatGPT. I was just so struck by the cut-through,

as it were. On that question, Matt, of the signaling through the bureaucrats and into the private sector, I mean, firstly, has there been subsidies into the private sector on AI? Yeah, absolutely. And they take a lot of different forms.

Sometimes it's a very direct subsidy. Sometimes it's, you know, free office space. In a lot of ways, what local governments are trading on is real estate. You know, that's been their sort of bread and butter for so long. So they'll often create a new tech park and they'll say, hey, if you found your AI startup in, say,

Xi'an, you get free office space for two years or tax breaks for two years. And it gets even more kind of nuanced than that at some point. So with the generative AI regulation, you now have this need to register with the central government, with the CAC, the Cyberspace Administration of China, in order to get approval. And in the early days of those approvals, it was very hard to get approval. And essentially each province was able to kind of nominate a company to be their representative

submission for national approval. And so you had companies that were choosing to relocate their headquarters to another province on the promise that then they will be the chosen nominee to be the generative AI company from Guizhou province or wherever it is. So we often think of it in a relatively cut and dry financial sense of subsidies, but

But it does get more nuanced than that. It can take the form of tax breaks. It can take the form of direct subsidies. It can also take the form of, yeah, real estate, or helping you hire international talent back to China. It can take a lot of different forms.

And then, Matt, on those signals from the government itself, I've also come across reporting about a "national AI team", where different businesses seem to have been designated different specialties to research within AI. Am I understanding it right, in the sense that the government seems to have gone to Baidu and Tencent and different companies and said, right, you guys are the facial recognition people, you guys are the autonomous driving people, blah, blah, blah? That kind of divvying up makes these private companies seem like essentially

arms-length bodies of the government. I mean, tell us about that AI national team. What is that? Yeah, so this is a relic of the national AI plan back in 2017. I believe that national team was officially designated in maybe 2018 or maybe 2019. And it can be a little bit deceptive, in the sense that they did sort of declare that, yeah, Baidu, you are doing self-driving, and iFlytek, you are doing voice and speech. I think Tencent's doing healthcare, right?

And in a lot of ways that really just reflected what the companies were already doing, and didn't have a ton of actual substantive impact. Baidu was doing self-driving and they were sort of ahead in that area. iFlytek was the longtime company working on voice and speech technology in AI. And so, yeah,

It was a nice propaganda signal in some way, and I'm sure it does have some impact. Then, you know, if Tencent is the national champion on health care, then when Tencent goes to Shandong province and says, hey, we want to work with your hospitals, then they can point to this national champion designation. It probably makes the local governments and the local hospitals more willing to work with them.

but I would say it wasn't quite so much of a, you know, tasking of you do this, you do that, and now you'll listen. I mean, and it all got scrambled with the sort of large model generative AI stuff. You know, in some way, this should be

maybe iFlytek's realm to a certain extent. They had done the most work as it relates to language and voice and speech, at least. But the companies that were working on this the longest were actually Baidu and some of these startups that came out of Tsinghua, like Zhipu and Moonshot. You know, there just happened to be researchers there who had been working on language models and building up these technical skills, and in some cases actually founding these companies.

And so there is a more, I'd say, at least in the sort of the formation of these companies, a more organic impulse there, or maybe a market-driven impulse or a sort of technical research-driven impulse. But then it does get caught up in politics in the way of

For example, the large model startups, there's a big question of who your customer is. It's very hard to earn real revenue from individual Chinese consumers just willing to pay for access to your language model. And so some companies are choosing to market a lot more to the government, to the government's needs for language models.

Then you do get tangled up and it comes down to the investment, you know, who's investing in these companies. At this point in time, a lot of the private venture capital investment is sort of dried up in China. And so it does end up having more government money or the government in some ways directing companies to invest in each other.

It's a very, very mixed bag. Yeah, no, that makes much more sense, because as we know, it's a cutthroat world in China's tech industry. The idea that someone like Baidu is just going to leave off an area of AI just because the government told them so is a little bit unrealistic. But it makes so much more sense that it's mainly a propaganda campaign, but one that also has real legs.

I think one major question for anyone interested in this topic is whether or not this is the next arms race for China and for the US. I mean, the fact that China is thinking about it in terms of a strategy, or a national AI team, does seem to suggest that this is how China is seeing it, back to your point about productive technology. And the US's export ban on cutting-edge semiconductors to China also partly has AI in mind. So I guess my question to you is, how inevitable do you think it is that this is the next arms race?

I think it very much depends on what we mean by arms race. There is, you know, one version of it where we're racing to build a nuclear bomb, and there's another version of it where we're racing to be the leader in an industry that will have far-reaching economic impacts, where there's not really a finish line. It's more like we are trying to be the lead in this. And

I think depending on sort of which part of the government, which part of either government you speak to or which thought leaders or policy entrepreneurs in some way you're speaking to, they'll focus on different things. There's a contingent here in the US and in the UK with DeepMind and elsewhere that's very invested in the development of artificial general intelligence, which they think is going to be an absolute game changer in the

in humanity's future and thus in national power. And I think that vision of where the technology is going has very different implications from a vision that says, "Hey, this is going to be a productivity enhancer. It's going to ideally support faster total factor productivity in economics terms. It's going to increase the productivity of our economies, and we want to be ahead of China in this regard."

I think if it leans towards that sort of first vision where we're building something that is paradigm shifting, game changing, it's hard to imagine countries not wanting to be the first, not being absolutely committed to sort of being the first in that realm. If it ends up playing out more like a technology like the internet or a technology, some people say like electricity, it's a much more sort of

slow-motion marathon in which both countries will be competing. It might not matter that you're the first to do something. It might matter that you're the best at diffusing that technology out through your economy. One of my friends and colleagues, Jeff Ding, just wrote a book on this, on the importance of

diffusion of technologies as the real decider of who ends up leading, of which country contributes the most to national power. And so I think there's a big divide on where this technology is going. And those things have very different implications for how the countries are going to approach it. You know, you mentioned semiconductor controls.

Even if the countries are, or say China is, committed deeply to leading, or trying to lead, in this technology, they're going to face real constraints. And I think in some ways they're more constrained now than they have been in a very long time, because of the state of the economy. I mentioned that venture capital has really kind of gone off a cliff in China. Local governments are more cash-strapped.

So just on a financial side of things, they're not in the same position they were in 2017, where they could just choose to hugely catalyze their AI industry through this kind of, you know, just splashing a bunch of money around.

They're constrained in that respect. And there's going to be the question of whether or not these chip controls can uniquely constrain them. I think there's a lot of angles to that. I won't go into every part of it, but I do think that over the medium term, at least two to five, two to seven years, these are going to act as a major tax on what China is able to do when it comes to AI. Yeah.

I'm glad you brought up Jeff. Jeff Ding came on the podcast a while ago to talk about facial recognition, back when that first became a massive privacy issue in civil society. And we were discussing that. And I listened to Jeff on a rival China podcast called China Talk, which is very good. So I would recommend that to my listeners as well. This point about diffusion is so interesting, isn't it? Because

Jeff's point, as I understand it, is that China may be the world leader in all sorts of areas of AI research, but if it's not diffused to wider civil society, to consumers, to businesses, maybe even academia, then you're not really incorporating it into your economy. That's at least how I would describe his theory. It makes me wonder if this kind of categorization of China's productive technology and unproductive technology is actually quite unhelpful, because when you have more diffusion of these kind of civil uses of AI, surely you would then tackle issues such as the aging population. You know, you might then be talking about productivity gains that in the long run mean that a declining workforce will be less of a problem. So it feels to me that

pushing out that technological frontier can only be a good thing when it comes to national power. And it's not immediately clear what is going to be productive and what is not productive at all.

Yeah, I think that's a very good point and a very fair criticism of the CCP's sort of mentality when it comes to this. They have a rigid and in some ways old-school conception of national power. They like factories, and they maybe don't like just people shopping, consumer goods, that type of thing. I think it's pretty hardwired into Xi Jinping's mentality that that's where power is, and

there's sort of an anti-consumerist bent to it. And there are times when that looks, you know, smart. Sometimes we look at how much we've invested in the United States in just social media, and what impact has that had? Was that really the best use of our limited engineering talent and our limited financial resources?

But when you look over a slightly longer timescale, it gets a lot fuzzier, and a lot of things come from unexpected places. A lot of productivity gains come from unexpected places. Well, stuff like TikTok, right? I mean, if the Chinese government was able to design its own economy, its own private sector, what it wanted it to focus on, TikTok presumably wouldn't have been the success that it is today, because companies like ByteDance wouldn't have been given the free rein to do whatever they wanted to.

Yeah. And I mean, ByteDance itself, it's interesting. A lot of my research recently has focused on how China came to write its AI regulations, what exactly motivated those and how they were shaped. And one of the things I found is that a lot of the early motivation for their first regulation, which was on recommendation algorithms, was essentially saying, you know, you need the recommendation algorithm to tilt towards what we like to see. You can't just have it sort of, you know, giving people exactly what they want at all times. It needs to tilt towards the party line. It needs to tilt towards positive energy, stuff like that. And the motivation for that actually came out of a very concerted

government crackdown on and fight with ByteDance back in 2016, 2017. They really didn't like the rise of, not TikTok or Douyin, the Chinese TikTok, but an earlier app they had called Toutiao, which is essentially a news recommendation app.

And the party was extremely worried about the fact that Toutiao was just giving people the news and the kind of content that they wanted. And it wasn't pushing, you know, these are the top five stories of the day, the way that they've always been able to control with traditional media websites. And so this concern over essentially how is information disseminated, how is information prioritized, led to a big crackdown that

I don't know if it ever put ByteDance as a whole at risk, but it very well could have; it hugely damaged the company. And we might not have seen Douyin and TikTok emerge in the way that they did. They sort of survived that regulatory storm and went on to become the ByteDance, the company that we know today. But in the early days, I mean, the government had a very antagonistic relationship with ByteDance. Yeah.

Reminds me of, you know, these kind of internet memes where American TikTok is compared to Chinese Douyin and the content on there. I don't know if you've seen these, Matt. I've missed those memes. Oh, it's quite a right-wing discourse.

It's the kind of thing you might see on, like, I think I literally saw it on Tucker Carlson, actually. You know, this is the kind of stuff that's on American TikTok, and it's like, you know, people twerking or throwing milkshakes at each other. And this is the kind of stuff that you see on Chinese TikTok, Douyin, which is like, how quickly can a Chinese child do mental maths? You know, that kind of stuff.

So, you know, I always used to laugh at that. But actually, you know, when you talk about these kind of government attempts to encourage positive energy on these platforms, you can see how, in the early days, those kinds of things fed into the algorithm, and that the Chinese Communist Party has actually had an influence on a socially conservative level, not just politically.

Yeah, yeah, absolutely. Yeah. And I want to talk about regulation because I feel like competition doesn't just have to be on the frontier of research, right? It's also on regulation. Who sets the rules feels like quite an important thing. So I wondered if you can talk a little bit about that. You've mentioned the generative AI stuff. You mentioned the recommendation algorithm. I think there was another one about deep fakes, deep synthesis. Yeah.

But also, you know, just why is it important to also be a front runner when it comes to regulation as well in AI? Yeah, this is a very common thing that you hear in DC especially, but I'd say globally and to a certain extent in China as well, that in DC they often say like, we can't let China write the rules of the road for AI. We have to write the rules of the road. Right. And...

I think there's some truth to that in that you do not want norms that come out of an authoritarian political system to become global norms on this. But as you dig in, it gets a lot more nuanced. And it's not necessarily true that because China has chosen to regulate partially in one way that we want to sort of imitate that. You know, it's almost the...

inverse of that meme that you mentioned on Tucker Carlson. In China, they were very early on regulating algorithms and AI. Their recommendation algorithm regulation comes out in 2021. Their regulation on what they call deep synthesis, but essentially targeting deepfakes, comes out in 2022. And their regulation on generative AI comes out in 2023.

This came out of an era when the CCP felt like it had a sort of free rein to crack down on its tech companies. And its first two regulations, on recommendation algorithms and deep synthesis, were pretty aggressive in terms of making demands and creating some regulatory architecture around them. But over the last

two years, essentially, in this period since ChatGPT came out, they've had to make this pivot towards trying to be more accommodating towards their companies. There's been, I'd say, a pretty active push to not regulate as much as they were. If they had continued on their early trajectory, they were talking about creating a national AI law. There's a draft facial recognition regulation out there that's been floating around for a while. But in many ways, the CCP has rolled back

on its regulatory ambitions for fear of stifling its industry. And you have a similar debate here in the US. We're always afraid of falling behind China, so we don't want to regulate too much. But I think maybe the most helpful way to look at regulation between the two countries is

it's not so much direct competition in the sense that it's not that India is going to just adopt the Chinese regulations on recommendation algorithms. India has blocked a lot of Chinese apps. They have a very conflictual relationship there. And it doesn't make sense in that context.

But there is actually an opportunity to learn from each other, not in terms of the content of the regulations and the goals of the regulations, but the specific mechanisms. Like, China has created a pretty heavy-handed in some ways, but also quite finely tuned, architecture for testing and evaluating algorithms for their goals. Now, their goals are different:

political control. And we can essentially remove that content from what we learn, but still learn from the structures. They have this structure of registering what they consider to be sort of high risk algorithms and running them through these tests of 2000 questions on this and 2000 questions on that.

And we're not going to adopt that system here in the US. And that might be a very bad system, but we do have the opportunity to essentially see how it plays out. Can you actually make these demands of companies, and can they meet those demands? Does a system of registration work well? We're getting to watch a full natural experiment in this play out in the other country. And China is doing the same thing when they look at us. Despite the ideological differences, they're very willing and eager to

to learn from what the U.S. is doing on AI regulation. So when the White House came out with an executive order on AI almost a year ago, China studied that very closely. And some of the ideas from that executive order have

directly made their way into Chinese policy proposals around AI. They don't see a conflict necessarily between the fact that this policy idea is coming from a different ideological world and putting it to their own use. I think that's something that we could

essentially learn from here in the US. China's rolling out a lot of stuff that we sort of fundamentally disagree with in terms of the content of the regulations. But we do have a chance to observe these structures and potentially learn from the structures and try to apply them for our own goals. Mm-hmm.

Matt, you know, talking about regulations reminds me that, of course, one of the big fears about AI is the existential risks that come with it. And that's why regulations are necessary and it's important to ensure AI safety. So I guess, kind of linking back to my previous question, about this idea of an arms race.

If they are competing, and let's say it's not existential to each other anyway, but if they are competing to have the best tech possible and give their companies the best freedom and leeway and funding, whatever it is, does that also put the world in a dangerous place when it comes to pushing out that kind of frontier AI and the existential risks that come with, for example, AGI and that kind of stuff?

We don't necessarily want the two countries to be racing ahead on this. I don't know. Maybe we do. No, I mean, I fully agree that that can create an extremely dangerous situation. Obviously, the scientific community is hugely divided on whether or not frontier AI, artificial general intelligence, could create existential risks or catastrophic risks for humanity. It's like not a settled question. But I think most people would agree that if you're going to have a world where the US and China are

very close in their capabilities, and they're both pushing ahead, and they're both constantly afraid of falling a little bit behind the other one. And, you know, that could drive them to ignore certain safety protocols or not go through the more rigorous safety testing and evaluation that you'd hope they would undertake.

The question is sort of, you know, if we have that concern, what do we do about it? Do we try to negotiate an agreement where we're both going to say, okay, we promise not to develop this or that, we promise to slow things down? That's a little bit far-fetched, at least at this point in time, given the deep, deep levels of distrust.

There's another approach that says, you know, our main objective should just be creating the widest possible gap between the US and China. I'd say this is essentially the mainstream view in DC, that if you're worried about an arms race, the best way to avoid that is to be so far ahead that it's not really a race, you know, that we have the buffer and the time to do things safely. And I think maybe there's a third way that in some ways sort of blends the two. And this is what I tend to be

most interested in: yes, we should be doing everything we can to stay as far ahead of China as we possibly can. And China is going to be doing everything it can to catch up.

But as we do this, it's possible to be in conversation, essentially, about the risks and about what we are doing to mitigate them. From a technical perspective, what do we see as promising ways to mitigate the risk that we lose control over AI systems? This is not going to be safety by agreement. I'm not putting all my eggs in the basket of some handshake agreement between whoever is president in the US and Xi in China. There's, I think, just too much distrust between the two sides. But if you have kind of much lower level contacts between the two ecosystems, where policymakers and researchers in the two ecosystems are having conversations about what are we doing? What are you doing? What do you find to be effective? How are you perceiving these risks? That can feed back into

building more effective safety mechanisms in both systems. I mentioned that China is very willing to learn from and adopt policy ideas from the US and elsewhere. And that includes policy ideas around safety, both the technical aspects of safety and the policy side of safety. So

I see it as more, if we get there, it's going to be sort of safety in parallel. Both the US and China are pushing as hard as they can on development, but they're also in parallel taking their own mitigation measures to try to ensure the safety of these systems. And they're comparing notes periodically on how that's going. I like the sound of that very much. But Matt, I mean, these low-level contacts...

it's only going to get more difficult in the short term, isn't it? When it comes to researcher-to-researcher discussion, you know, I came across this stat, I think it was from the think tank MacroPolo, which has a Global AI Talent Tracker, that actually more and more in the last few years, AI researchers are staying in their country of origin, especially when it comes to Chinese AI researchers. And that's a political climate thing, I guess. Tell us about that, because there's so much suspicion, I feel, in the West when it comes to ethnically Chinese scientists, or scientists who've studied in China, then study abroad, and then maybe stay abroad. It's almost very 20th century, that kind of suspicion, again. Yeah, the Global AI Talent Tracker. I built an early version of that at MacroPolo back when I was there. Jeff Ding actually worked on the very first version. I'm very familiar with that data set.

And yeah, I mean, broadly speaking, if you go back to 2017, 2018, there was just an incredible flow of people, a flow of these researchers across national boundaries. And there was very intensive cooperation of researchers across national boundaries.

The US and China were by far the leading source of co-authored articles across national lines. I think the US and China co-authored more AI research papers than the US and every other country combined. It was by far the largest source of that type of international research cooperation.

And a lot of that has been dampened. You know, it's been dampened by the political climate, by sort of hard visa restrictions, by fears, real fears on both sides about what happens if I'm in that country, what happens if I work with someone from that country. And it was hugely dampened by COVID. And I think we might see some recovery of this after COVID. I believe the latest MacroPolo numbers are from 2022, when the COVID restrictions were still in place.

But there's been just, like, a fundamental baseline change here. I think whereas in the past, this cooperation between researchers was not unregulated exactly, but it was just not very directed, not very concerned with the political aspects. It was very free-flowing: you're working on something, I'm working on something, let's work on it together. Yeah.

I think when I'm thinking about these types of researcher and policymaker engagement, especially when it comes to safety, these are becoming more targeted and directed through sort of more official channels. So recently there was a new version of this set of dialogues, called the International Dialogues on AI Safety, that gets together some of the most elite Chinese scientists and some of the most elite scientists from the West, you know, the founders of deep learning, the sort of

the godfathers of modern AI in the West, and their counterparts in China, like Andrew Yao, Zhang Ya-Qin, and a bunch of other pretty respected researchers, to have sort of focused conversations about AI safety, conversations that are

in many ways more aware of and directed towards the geopolitical aspect of this. Whereas before it was purely, let's work together for our own purposes, nowadays you have to be much smarter about it, even when it comes to the safety of AI systems. It's not that you can just have any conversation that you want with Chinese researchers. A lot of the measures that are designed to improve the safety of these systems also enhance the capability of those systems.

And so it's going to take much more directed, smart, strategic engagement between the two sides. But this is happening at different levels. We usually call it track two dialogues, where track one is sort of government to government and track two is non-government to non-government. But I think those dialogues can form a very important part of the US-China AI relationship going forward and can contribute to sort of the safety and stability around this in the future. Yeah.

Matt Sheehan, thank you so much for coming on to Chinese Whispers and I hope you're right. Thanks very much for having me. Thank you for listening to this episode of Chinese Whispers. I hope you enjoyed it. If you're listening to this podcast on the Best of the Spectator channel, remember that Chinese Whispers has its own channel as well. If you just search Chinese Whispers, wherever you get your podcasts from, you will always get the latest episode first there.

If you have any feedback, positive or negative, but preferably constructive, please do email me at podcast at spectator.co.uk. And I'd also love it if you left a review or told your family and friends about the podcast. It's a way to help us grow. So thanks so much for listening and join us again next time.