
Predicting AI Policy in the Second Trump Administration

2024/11/7

The Truth of the Matter

People
  • Andrew Schwartz
  • Gregory C. Allen

Topics
Andrew Schwartz notes that the Trump administration will not be regulating artificial intelligence from scratch, but will build on the policy foundation it laid in its first term. He reviews the commonalities between the Trump and Biden administrations' AI policies, such as investing in AI research and development, improving the workforce, and international cooperation, and points out that the Biden administration's AI executive order overlaps in many respects with the Trump administration's executive orders. He also notes that AI became a partisan issue in the 2024 campaign, with Republicans criticizing the Biden administration's AI policy and pledging to repeal the Biden executive order on day one. Gregory C. Allen argues that after the Trump administration repeals the Biden executive order, its replacement could be very similar to the repealed policy or entirely different. He explains that the Biden administration's regulation of large general-purpose AI models was relatively light touch, focused mainly on transparency requirements, and that the Trump administration's goal is likely a shift toward even less regulation. He also discusses which parts of the Biden administration's national security memorandum the Trump administration may retain, predicting that it may keep the provisions related to energy policy, such as streamlined permitting to support energy and AI data center buildouts, but probably not the provisions on immigration. In addition, he analyzes the future of the AI Safety Institute and the Trump administration's likely positions on AI chip export controls, open-source AI models, the relationship with Taiwan, antitrust action, and energy policy.


Chapters
This chapter explores the AI policies of Trump's first term, comparing them to Biden's approach, noting commonalities and highlighting the shift towards a partisan issue in the 2024 campaign. The bipartisan nature of AI policy during the early Biden administration is also discussed.
  • Trump administration's two executive orders on AI
  • Commonalities with Biden's AI executive order
  • Shift from bipartisan to partisan issue on AI policy

Transcript

I'm Andrew Schwartz, and you're listening to The Truth of the Matter, a podcast by CSIS where we break down the top policy issues of the day and talk with the people that can help us best understand what's really going on.

This is a great crossover episode to be having with our AI Policy Podcast here on Truth of the Matter. We're going to be talking about AI policy under the incoming Trump administration with none other than Gregory C. Allen, my colleague who heads the Wadhwani AI Center. No one knows this stuff better than Greg. This is going to be a great podcast. Listen here.

Welcome back to the AI Policy Podcast. We're talking on November 7th, 2024, just a couple of days after our presidential election. I have with us Gregory C. Allen, my colleague with whom I do this podcast. And, you know, Greg, this is a great time to be talking about what AI policy might look like in the incoming Trump administration. But first, I think we should dive into what Trump did on AI as president the first time around. So let's get started.

Let me ask you, how did the first Trump administration approach AI regulation? Yeah, I think this is the right place to start, because the Trump administration is not starting from scratch. They're starting from a precedent that they themselves set by making a big push on AI policy throughout the first Trump administration. So the Biden administration has an AI executive order that has gotten a lot of attention over the past year. The Trump administration had two executive orders.

And what's interesting is that while there's definitely a difference in degree, and to some extent a difference in kind, there's also a lot of commonality with what was in the Biden administration executive order. So things like investing in AI research and development,

unleashing AI resources and trying to make AI data more available to experts, researchers, and industry, setting AI governance standards, improving the workforce, international engagement, finding ways to work with allies and partners on standards and collaborative research. That's all stuff you can point to, to some greater or lesser extent, in the Biden administration's AI executive order as well. And some of the stuff that's going on now, like

The NIST AI Risk Management Framework, which has a great reputation in academia, in industry, and internationally, was finished during the Biden administration, but it was launched during the Trump administration. And even AI safety, which we'll talk about more here, is an area where the Trump administration's AI executive orders very frequently talk about prioritizing safe, secure, transparent AI.

So this is what I mean when I say it's interesting, because some individuals from the Trump White House on the civil servant side of the equation continued to serve in the Biden administration. And for a while, AI was a bipartisan issue. But is there a partisan split now? Yeah, I think even for most of the Biden administration, AI was a pretty bipartisan issue. You know, you think about, for example,

Senate Majority Leader Chuck Schumer, when he came here, that was as part of a gang of four senators from both the Republican and Democratic parties. When he was here at CSIS. Exactly. Giving his major AI speech about how to educate Congress on AI. Yep. And that was a bipartisan initiative. But now you fast forward to the 2024 campaign, and it is a partisan issue. And

the Republican Party was specifically criticizing the Biden administration's approach to AI policy. And they were criticizing Vice President Harris, who at various points was described as the Biden administration's AI policy czar. So it was really something that they tried to knock her on,

particularly the executive order. So I think that takes us, now that we've talked about what the Trump administration did in the 2017 to 2021 era, to what the Trump campaign of 2024 talked about. There's one line about artificial intelligence in the Republican Party platform, and the key quote is that it's going to repeal Joe Biden's dangerous executive order that

hinders AI innovation and imposes radical left-wing ideas on the development of this technology. And in December 2023, Trump said not only is he going to repeal the executive order, but that it's a day one priority. So the Biden administration executive order, actually the longest executive order in the history of the United States of America, the Trump administration was expected to repeal.

So first day, we're going to see him signing, as we did years ago, a big folder that says, this is gone. Yes, yes. And I think the question then becomes, what replaces that?

Because as I said, there was a lot of stuff in the Biden administration executive order that you could trace to precedents set in the first Trump administration. So there are times when you say, this is all repealed and here's what replaces it, and some of what replaces it is pretty close to what was repealed. And there are other times when you say, literally, the whole thing is gone. And I think that's actually the open question here. Whether it's all gone or whether this is... Whether they pick the parts they like. I mean, that's the question I have. Is this just rebranding?

I don't think so. I do think there is a goal here of shifting away from a regulatory approach. And a lot has been made of the Biden administration's regulatory approach on AI, but it actually was

more light touch than you might think. So for example, when it comes to large general-purpose AI models, things like ChatGPT and its competitors that can perform across a lot of different sectors of the economy, the only mandatory requirements were transparency requirements. When you conducted those safety tests, you had to share that data with the government.

There was nothing to the effect of you had to run these types of safety tests or else face fines. Or if you used it in such and such a way, you're going to be put in jail. That type of stuff wasn't in the executive order. The compulsory stuff was really around transparency. And almost everything else was about voluntary standard setting, public private partnerships. And that stuff was all in the Trump executive orders, too.

So let's look forward to the future of AI policy under Trump. The Biden administration did release its national security memorandum on October 24th, which we just talked about on our last episode. To what degree do you think this document will be implemented under the Trump administration? Yeah, I think here there's more that is likely to survive.

Because there's a lot in that national security memorandum that is not just consistent with prior Trump administration actions, but consistent with Trump campaign pledges. So, for example, whenever candidate Trump was asked about AI, he often pivoted to energy policy. Well, the AI national security memorandum directs the White House chief of staff, the Department of Energy, and other parts of the interagency process to

come up with solutions for how they can reform permitting requirements or identify other blockers to a massive energy build-out and a massive AI data center build-out. I mean, that's like the exact sort of thing I can imagine in a Trump administration day one executive order. There's other parts of that national security memorandum, as you and I talked about last time, like reforming immigration requirements for high-skill AI talent. There was actually some of that in the first Trump administration, but...

Everything related to immigration, I think, is going to be a touchy subject right now. So it wouldn't surprise me if that's one of the parts that gets thrown out.

And then finally, there's another part of the executive order that deals with the regulations on the government and the types of companies that work with the government related to bias, civil liberties, discrimination. And I can imagine a lot of that stuff getting thrown out with the parts of the Republican Party platform that were anti-DEI and related initiatives. Sure, sure. Let's talk about safety for a minute. Earlier this year, the Biden administration stood up the first AI Safety Institute. We talked about that also.

on this podcast. The Institute was heavily featured in the national security memorandum. To what degree, Greg, do you expect the Trump administration to continue the Institute's work? Or is it something, I mean, I know they take AI safety seriously, there's no question about that. But in the past, President Trump has undone a lot of things. So what are we looking at here?

So I would say the future of the AI Safety Institute is by no means assured. But I would also say that its elimination is not assured either. This is very much in debate. And it sort of relates to who are going to be the primary advisors to President Trump on AI policy issues. So just take one person, for example, Elon Musk, who was

extremely influential during the presidential campaign. Now, at this point, I think it's safe to say, a close confidant of President Trump. Well, Elon Musk has a lot of opinions on AI policy. Yeah, and he knows a lot about it, obviously. Exactly. And he has had strong opinions for a really long time. So just to think about a few things here, he was one of the founders of

OpenAI and was involved in recruiting some of the key early officials of OpenAI. Well, the early days of OpenAI were all about AI safety. That was literally the argument for creating the organization. Elon Musk publicly called for regulatory oversight of AI, and did so on human-extinction grounds. I remember this when it happened.

He said, quote, with artificial intelligence we are summoning the demon. He was literally saying this is a technology that, if we're not careful, could lead to the extinction of humanity. And if you're someone like him who's trying to repopulate the world, that's no good.

Well, exactly. And you've got to remember, his justification for SpaceX making life multi-planetary is all rooted in this obsession with making sure, in his words, the light of human consciousness extends forever. So he really cares about these existential risk concerns. He has been promoting the work of Nick Bostrom, the philosopher who wrote a book called Superintelligence, which is all about how

AI could kill us all. And more recently, I mean, this is a pretty consistent policy preference. When California had that AI safety bill in the California legislature, which was recently vetoed by Governor Gavin Newsom, Elon Musk was out front among leading technologists in saying he supported the bill. He was willing to publicly back it. So that's one very important member of the Trump policy community who really is open to AI safety. And then if you think about what the AI Safety Institute is actually doing, it's almost entirely voluntary, right?

The leading tech companies say, we want the government as a sort of impartial arbiter where we can collaborate on AI safety research and disseminate it amongst ourselves, where we can get authoritative guidance that will reduce the risk we face from lawsuits or other types of things, and that will also help customers, whether in business or among consumers, to

have confidence in the safety of our products. So a lot of this demand is really voluntary and coming from the private sector. So while there is a faction in

Congress right now that is really skeptical about the AI Safety Institute, there is also a group of 60 companies, some of whom are prominently aligned with Republican Party politics, who are in favor of the AI Safety Institute and its continuation. So I would say this is an active fight. There are people in the Trump AI policy community who are against it, and others who are in favor of keeping it.

All right. So that's one to watch. And there'll be points scored on either side, I'm assuming. Let's go on to AI chip export controls. Stricter export controls on AI chips have been one of the biggest priorities of the Biden administration. It's a signature thing. Do you expect the Trump administration to continue these? So here again, basically,

This is something that has roots in the first Trump administration. They were the ones who got the ball rolling on semiconductor export controls against China. Competing with China. Exactly. With the ZTE controls in 2018, the Huawei controls in 2019, the SMIC controls in 2020, and then the EUV lithography controls, the advanced semiconductor manufacturing equipment controls, in

December of 2020. So these are all the origin story of the government getting into the business of focusing on semiconductors and export controls as the locus of technology competition with China. The Biden administration's actions were unambiguously more significant, more substantial policy than what the Trump administration was experimenting with.

They were definitely a continuation of the approach that the Trump administration pioneered. And at the same time, most of the criticism of the Biden administration coming from at least Republican leaders in Congress was that it wasn't strong enough, that they didn't go far enough, that they weren't enforcing the rules that they had strictly enough.

And so it's tough to see why the Trump administration wouldn't want to continue this approach. There are a couple of caveats. Number one is that President Trump himself, as an individual, as opposed to his administration, was reportedly interested in semiconductor export controls as a bargaining chip in a larger deal on trade. That is how the ZTE export controls were ultimately abandoned, before they made their biggest impact, back in 2018.

So one could imagine that Trump, just as in the first administration, is going to put this massive tariff on Chinese goods. He has said that on day one he wants a 60% tariff on Chinese goods. Well, that could be just what he wants forever, or it could be part of a larger strategy to renegotiate trade with China. And I could imagine semiconductors and the AI chip export controls being a part of that

debate. All right. So that might be an area where he agrees with President Biden, but he'll just incorporate it into the larger picture of his tariffs. Yeah. I mean, a colleague of mine, Divyansh Kaushik, said recently that Trump is predictably unpredictable. And so while you can point to all these things that might lead you to conclude he will act one way, you sort of always have to have in the back of your mind that he reserves the right to do a 180 on you.

Sure. All right. Let's talk about open source. Whether powerful AI models should be open source is one of the really fierce AI policy debates of our time. What would you expect the future of model weight regulation to be in a Trump administration? I think it's worth situating this in the context of

where we are in the open-source AI policy debate. On the corporate side, there are really two communities backing open-source AI. The first is Meta, which produces the most widely used open-source AI models with its Llama family. And then you also have the venture capital community, who see that their startups,

which are not necessarily in a position to create something directly competitive with Llama, can build on top of Llama in a way they can't necessarily build on top of ChatGPT, whatever their business use case or business model is. So the open-source community has backers like Marc Andreessen of Andreessen Horowitz, who was a big donor to the Trump campaign and a big source of policy proposals in Republican policy circles on this topic.

And then at the same time, what's really interesting is that Donald Trump feuded with Mark Zuckerberg and Meta relentlessly in his first administration. They launched, I believe, an antitrust investigation into Meta that debated breaking up the company. So what's interesting is that the incoming Trump administration, at least in terms of what he was saying in his campaign, is very, very pro-open source.

But in terms of the company that's really responsible for most of the movement in open source, they have a lot of issues with that company. And I'm not sure how that's going to shake out. But in general, I would expect support for open source to continue. So just as an aside, you'd rather...

be Elon Musk right now than Mark Zuckerberg with the new administration coming in. I think it's pretty unambiguous that, if the criterion you're assessing is how you expect the administration to treat your policy preferences, Elon Musk is in a pretty privileged position. Right, right. Let's move on to Taiwan. As the most advanced AI chips are made by the Taiwan Semiconductor Manufacturing Company, what we all know as TSMC,

our future relationship with Taiwan is key to understanding the trajectory of AI policy. So how do you expect the Trump administration would handle the Taiwan relationship? Sure. So I think it's worth pointing out here that, officially, the United States has long had a policy of strategic

ambiguity on Taiwan. Which a lot of people feel is very ambiguous and don't know what it means. Exactly. And so the question is, would we defend Taiwan if China were to invade, under what circumstances, et cetera. President Biden sort of shook that up by saying on four separate occasions that yes, we would use American military power to defend Taiwan in the event of a Chinese invasion.

Not so ambiguous. Not so ambiguous. And then you bring in President Trump, who reportedly said, and this is not something he has confirmed publicly, that there's very little we can do to defend Taiwan in the event of a Chinese invasion. You know, why would we even try? So that's one part of the equation. The other part of the equation is that he has said on the record

Something to the effect of Taiwan stole our chip business and that they need to pay for their own defense. So he's sort of saying, hey, you stole our chip industry. And now because that chip industry is so important to our economy, you're sort of holding it hostage to make us defend you. Right.

This is a very, very accusatory posture towards Taiwan. Let's unpack that for a second. So that's not actually accurate, is it? Because didn't we more cede our chip business to them rather than have them steal it? Yeah, I mean, there's no evidence of the type of widespread intellectual property theft. Right.

that we invoke when we say, you know, China stole our technology. Correct. And in fact, when we talk about the early days of moving parts of semiconductor manufacturing to Taiwan, that was actually done with the support of the U.S. national security community. They noticed that

when it comes to Asia, peasant farmers are really vulnerable to communist propaganda, especially when they're starving. And we were like, okay, we need to help Taiwan create jobs. We need to show them that capitalism works.

And moving out the low-skill, labor-intensive parts of semiconductor manufacturing was not just an increase-the-competitiveness-of-the-American-semiconductor-industry strategy. It was also an attack-communism strategy. That's the larger context. That's the larger context. It's not nearly the IP-theft kind of story that it's made out to be. That said, Trump used the word stole, and presumably believes it. Presumably believes it.

So what he likes is moving TSMC manufacturing plants to America. What he does not like is the fact that Taiwan has a really weak military.

Even relative to the size of its budget now. And even if Taiwan were to spend 100 percent of its GDP on its military, they're still going to have really, really big problems vis-a-vis China. Of course. That said, they underinvest in defense, and Trump views that as them sort of extorting us through this semiconductor relationship. So I'm sure he's going to be pretty hard

on Taiwan in negotiations. And given that all these big AI companies are absolutely dependent upon Taiwan for advanced chip manufacturing, at least for right now, that's going to be a big policy issue. That's really interesting. So that's another one we're going to watch really closely and be talking about on this podcast.

Let's move on to antitrust. Cutting-edge AI models are developed by a small number of companies, as we know. As antitrust action against big tech companies becomes more common, how do you think a Trump administration might approach regulating the small number of leading AI developers like OpenAI or Anthropic? Yes. And this is something where it's so confusing. It's really, really hard to predict the future. As we just mentioned—

The Trump administration, the first time around, launched antitrust investigations into some of these same companies. And Lina Khan, the current chair of the Federal Trade Commission, has been a

real driver of a renewed emphasis on antitrust enforcement in the United States, specifically targeted at the tech sector. And let's just say for a second, one of her biggest allies in the United States Congress is J.D. Vance. Bingo. That's exactly what I was going to say. There's a quote of his from February

that I think is so interesting. Here's the quote: one of the few people in the Biden administration that I think is doing a pretty good job. Yep. That's what he said about Lina Khan. And that goes a long way. They're both Yale-trained lawyers. There's a real relationship between the two, and with Josh Hawley for that matter as well. Yeah. And again, the Marc Andreessen community makes a distinction between big tech and

little tech, with little tech referring more to the venture capital community. And they're making the argument that these entrenched monopolies (in J.D. Vance's mind, that's companies like Google) are making it impossible for new startups and a new generation of companies to grow to that same scale, and, in J.D. Vance's words, because of violations of antitrust policy. Here's the flip side of that, though: Elon Musk just tweeted that Lina Khan will be fired soon.

Wow. So there, within the Trump policy community, you've got different views on this. So if I had to guess what's going to happen, I would say that somebody who shares Lina Khan's skepticism about the fairness of competition among big tech companies is probably going to be in the FTC seat. I kind of doubt it's going to be Lina Khan herself. Right, right.

Another one to watch, Greg, and that's kind of an interesting one, isn't it? Yeah. Energy. We've got to talk about energy. Yes. The energy infrastructure requirements for training AI models are incredibly significant, and policymakers are looking for sustainable energy policy solutions. We're talking about this a lot at CSIS, both you and Joseph Majkut, our Energy Security and Climate Change Program director.

How do you expect a Trump administration to approach this challenge? So I think it's worth saying again that every time

President Trump has been asked about AI on the campaign trail, he almost immediately starts talking about energy. So I think this is something that really animates him in terms of his priorities for AI policy. And as we said, these systems are monstrously energy-intensive. OpenAI's GPT-4 is estimated to have consumed 62 gigawatt-hours of electricity during its training phase. So that's like a 30-megawatt

power station operating only on AI for months to train this stuff. And we talked about this on the last podcast. There's serious people in this country who are talking about building tens or hundreds of gigawatts

of power generation capacity really just for these AI models. Yeah, I thought that was one of the most interesting things we discussed on our last episode, to be sure. Exactly. And so Donald Trump has said he wants to double or triple the kind of energy generation we have right now. Go big. Really focused on AI. So I think that's mostly going to come from a deregulatory perspective: identify

everything that you have to do that makes it so that, once you've decided to build a power station, you're still 12 to 24 months away from even putting a shovel in the ground. Let's find all that permitting stuff, all that regulatory stuff. How can we shorten it? How can we make it a lot cheaper? And I think that's really where their priority is going to be: deregulation. I'm not expecting massive government subsidies to build all of that, but they kind of don't need them. I think

they're looking at something like $200 billion of capital expenditure on data centers and energy generation capacity. Yeah. And so when they're paying the bill, who is any administration to tell them not to, I guess, is the way they're looking at it. And then I think this also comes back to the role of U.S. allies and partners. If you think about

countries such as the United Arab Emirates, they have been pushing the Biden administration to allow them to execute this deal they had with Microsoft, where they're going to build a lot of electrical generation capacity. They're going to build a lot of data centers. And then Microsoft is going to lease and operate out of those data centers. Well, I think J.D. Vance and Donald Trump are going to ask very tough questions about why wouldn't you build it here? Yeah.

And is it secure over there? Exactly. I think they very much have an American industrial policy mindset, and I think deregulation and tax credits are probably going to be the two big tools in the toolbox for pursuing that. And AI is on the agenda. And why wouldn't we build it here? It's possible, isn't it? Well, the cost of electrical generation is much lower in the UAE, the dollars per megawatt-hour much, much lower. And then the other thing is everything we were just talking about, about how long it takes to build.

You can build a lot faster over there. Yeah. So if you're a company asking, how do I get to gigawatt scale as fast as possible? There are countries that have a demonstrated ability and willingness to move fast. And it's not that they don't want to build in the United States. The United States definitely has more data centers than the rest of the world. But

if you're talking about a massive expansion, there are other countries where, at least given the current regulatory and cost landscape, it's easier to build. But I think, as I said, the Trump administration is going to ask very tough questions about why not do that here, and what we can do to make it so that you do it here. Yeah. And I think that's going to be a big topic of our podcast going forward: how do we bring this to our shores? And if it's going to cost more, do the benefits outweigh those costs? Like,

many more jobs for Americans? Do we have enough people who know how to do this stuff in America? Et cetera, et cetera. So I think that's one we're going to keep talking about. Yeah. And just to give you some anecdotes that have

risen to the level of public awareness: Meta recently had its plans for building a large power plant and a large data center blocked by, I believe, the Endangered Species Act, because of a rare species of bee. That's right. And I just can't imagine the Trump administration having tolerance for anything like that. I mean, Elon Musk in particular has been jumping up and down about how the environmental protections we have make it impossible to build anything in America.

So I expect the administration to go hard after all of that. Greg, this is a great tour of what we might be able to expect and certainly about the issues we're going to be talking about going forward. So thank you so much. Yes, with the caveat again that Trump is predictably unpredictable. So we'll see if any of this comes true. You got it.

If you enjoyed this podcast, check out our larger suite of CSIS podcasts from Into Africa, The Asia Chessboard, China Power, AIDS 2020, The Trade Guys, Smart Women, Smart Power, and more. You can listen to them all on major streaming platforms like iTunes and Spotify. Visit csis.org slash podcasts to see our full catalog.