Skepticism can lead to a retreat into echo chambers and a lack of shared authority among different groups, which undermines democratic dialogue. It also facilitates lying by making people skeptical of even true information.
Deepfakes were prevalent but had limited direct impact on voter behavior. However, they contributed to a broader problem of politicians disclaiming true information as fake, which eroded trust in legitimate information.
AI turbocharges social media by enabling more sophisticated bots, propagating synthetic media, and influencing what users see. It also facilitates anonymous communication, making it harder to tell whether content comes from a human or a machine.
Concentrated AI power could lead to monopolistic control over speech and information, similar to concerns about social media monopolies. However, the rise of open models offers a potential counterbalance by allowing for greater competition and access.
Open models allow for greater competition and access, particularly in under-resourced nations, but they also pose risks such as misuse by bad actors, including the creation of harmful content like child pornography.
The European AI Act is a first mover in AI regulation, focusing on high-risk applications like facial recognition and discrimination. However, it initially excluded generative AI, and the arrival of ChatGPT forced a reevaluation of its scope.
Competition between nations and companies drives rapid AI development, but it also raises concerns about regulatory lag and the potential for authoritarian governments to dominate the AI space, which could be detrimental to democratic values.
AI could assist in tasks like drafting legal documents and handling mass adjudication, particularly in areas with large backlogs of cases. However, human oversight will still be necessary to validate AI-generated decisions and prevent errors.
Hi, everyone. It's Russ Altman here from the Future of Everything. We're starting our new Q&A segment on the podcast. At the end of an episode, I'll be answering a few questions that come in from viewers and listeners like you.
If you have a question, send it our way either in writing or as a voice memo, and it may be featured in an upcoming episode. Please introduce yourself, tell us where you're from, and give us your question. You can send the questions to thefutureofeverything@stanford.edu. The future of everything, all one word, no spaces, no caps, no nothing, at stanford.edu.
S-T-A-N-F-O-R-D dot E-D-U. Thanks very much. AI accentuates the abilities of all good and bad actors in the system to achieve all the same goals they've always had. And that's true for democracy and elections, just as it is for the economy or other aspects of social interaction. And so in sort of every little nook and cranny of our democracy, AI is going to peek its head. ♪
This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you're enjoying the show or if it's helped you in any way, please consider sharing it with friends, colleagues, and family. Personal recommendations are the best way to spread word about the podcast.
Today, Nate Persily from Stanford University will tell us that AI has implications for democracy itself because of its effect on communications, deep fakes in elections, and trust in the truth. It's the future of AI and democracy. Before we get started, a reminder to please tell your friends, family, and colleagues about the show. It's one of the best ways to help us grow the podcast.
When you think about AI, you might not be thinking about its impact on democracy, but you know, AI is starting to insinuate itself, for good or bad, into all aspects of life: in healthcare, in finance, in government, in communications, in journalism, and in the law. And so it shouldn't be a surprise that the uses of AI, and even the very existence of AI, have implications for our democratic processes.
For example, deep fakes. People are worried that in future elections we're going to see lots of fake, synthesized media with people saying things that they didn't really say or believe, or doing things that they really didn't do or contemplate doing, and that this will affect the electorate. Indeed, in the global elections across many countries in the last couple of years, there have been many, many examples of deep fakes. The good news, as we're going to learn, is that they don't seem to have radically affected most elections. People can figure out what they're looking at for the most part. But there's something more insidious, which is that this may make it harder to know when you're dealing with the truth and when you're dealing with falsehoods. And more importantly, it allows people with bad motivations to say, no, no, no, no, I didn't say that. That's a deep fake.
That's AI. I didn't do that. That's AI. This is a way for liars to reap benefits from the uncertainty posed by the existence of AI itself. Well, Nate Persily is a professor of law at Stanford University and an expert on elections, democracy, and the interactions of those areas with technologies like AI and social media.
Nate, recently you've been focusing on AI and democracy. So I think my first question is, what does AI have to do with democracy? Well, AI is relevant to all social phenomena. And so democracy can't escape its impact. And so, uh,
AI accentuates the abilities of all good and bad actors in the system to achieve all the same goals they've always had. And that's true for democracy and elections, just as it is for the economy or other aspects of social interaction. And so in sort of every little nook and cranny of our democracy, AI is going to peek its head.
Gotcha. Okay, so one of the things that everybody was worried about (we just completed some elections) was deep fakes, and that deep fakes would lead people to believe that certain people said something or didn't say something. What actually happened at the last election? Were deep fakes a big factor?
So there were millions of examples of synthetic media or deep fakes. That's true in the U.S. and around the world. Let's remember that this was not just the U.S. election year, but it was kind of the World Cup of elections, where countries with over four billion people were voting. And so we saw lots and lots of deep fakes and synthetic media, but we didn't see many that had any impact.
And there's sort of a lesson here, which is that, you know, if a deep fake occurs in the forest and there's no one there to view it, there isn't really much of an impact. And so, you know, it wasn't that people were deceived by, say, synthetic imagery into believing something that would shift their vote. The phenomenon we saw that was more ubiquitous is politicians disclaiming true stuff as being fake.
And that is in some ways, I think, the deeper problem with synthetic media. It's not that people end up being persuaded by false stuff. It's that it ends up infecting sort of their belief in true stuff. And that, from a democracy standpoint, is more pernicious.
Yeah, so I've read, you've written about this recently, and it's fascinating stuff. And one of my first impulses, and, you know, I'm an optimist, and I have to sometimes keep this under control, was, well, is there a positive side here that people are now naturally skeptical about everything? And skepticism is not always a bad thing. Again, however, when you're skeptical, you need to be able to act to kind of
clarify your skepticism. So is there a good dynamic underneath here about nobody trusts anything and therefore people are taking a pause and perhaps thinking more deeply about what they really should and shouldn't believe based on what they can and can't see? Or is that just Russ way off the edge of optimism? Well, I'm worried about deep-seated skepticism. I think that most information that we confront in our daily lives is not false. And so the more that we end up
being skeptical of that information, the harder it is to make decisions political and otherwise. And so it's not just that we become skeptical about sort of politically relevant stuff or stuff related to campaigns.
It means that it facilitates lying if people are naturally skeptical of everything that's happening out there. And more importantly, it means that we retreat into our own separate echo chambers of trusted sources. And so we don't have a shared sense of authority among Democrats and Republicans or among different groups. And I think that is a challenge for democracy. Gotcha. Okay, good. And I buy that. The skepticism can be pernicious, and it actually starts to eat away at things that were much more fundamentally evaluable as true or false. So,
I'm amazed. We've been in this conversation for almost five minutes and I haven't said social media, but I've now said it. Your thinking is intricately related to social media and you think about both parallels and intersections between AI and social media. And then their mutual relationship to democracy. So help me unpack the social media aspect of this. So AI sort of turbocharges social media and all the benefits and pathologies of it. So
AI is a tool and is a tool that can be used, as I said before, for good or ill. And so whether it's
making more sophisticated bots or trying to propagate fake imagery or trying to combat that. So the AI tools that the platforms have are absolutely important in trying to take down bad actors. So AI is infused in all of our social media experiences. One thing I will add from a kind of democracy perspective is that
One of the challenges that the internet in general, and social media in particular, I think pose is that because we're interacting with computers, they facilitate anonymous communication in ways that give anonymous speakers a megaphone they never had in the pre-internet world. Now, if you take my First Amendment class here at Stanford, you learn that anonymous speech is actually constitutionally protected. The Federalist Papers were written by Publius, after all. Yes, yes. You've seen the play.
Anonymous speech is protected, but one of the things the internet does is make it more and more difficult for us to figure out whether we're talking with a person or talking with a machine. And the rise of AI deepens that kind of disjuncture, and that might be critical for deliberative democracy, because we essentially weigh and evaluate truth and people's content based on who they are, what they say, and what they believe.
Whereas now you could have bots and other sorts of AI, AI agents, that are propagating a lot of the information in the social media universe. In fact, this is something that is now on offer from Facebook. Facebook has been saying that, essentially, they're going to be putting AI-generated content into your feeds to see if you like it.
Okay, so now that you've established very clearly that social media is the perfect platform to deliver AI "information," and I put that in quotes, that leads to the next issue that I know you've been thinking about, which is the concentration, or the potential concentration, of power and AI capabilities in a few big tech companies, and what this means for the
world and for the United States in particular, when it comes to
the power that might be concentrated in a few platforms. So can you elaborate, what are the issues there and what should we do about this? - So there are a lot of similarities between the social media universe and the AI universe. And some of the same players like Facebook and Google are in both, or Meta and Google. But I think there are some critical differences as well. And so I wanna highlight those. One of the challenges with say social media monopolies is that these are speech marketplaces. And so the power that a lot of these companies may have
is over the speech and interaction that's politically relevant or otherwise. And so we would worry if one company monopolized the social media universe because their speech rules then could govern what people see and hear. Now, what's interesting over the last five plus years is that the social media universe has actually become more fractured, lots of new platforms. And so there's reason to think that
Some of the concerns that might have been there earlier are not as pronounced now. So then the question is, well, what about the AI companies? Is sort of AI similar to search, where we're going to have one Google effectively that is going to be dominating the marketplace? They certainly are behaving like it is because they're building more and more powerful models
on the thought that whoever achieves, like, AGI or some powerful model will be the one that wins. And if I may just interrupt you, many of the search engines now offer you something at the top of the page when you do a search. Yes, they'll give you the websites that were hit, but they'll also say, would you like me to give you an AI summary of what I just found? And so you can tell that this is becoming part of their business model. That's right. And so, yeah, the whole idea of search is changing. But
Will ChatGPT beat Claude and beat the others? And will there be one chatbot? Is that the future of AI? And there's reason to think that the kind of ecosystem with AI is a little bit different than social media, principally because of the rise of open models. So Facebook's, Meta's, Llama model is in some ways the most advanced, but there are many others around the world. And in fact,
There are variants on this. There are over 60,000 open-model variations now on Hugging Face that people can access. And that's important because an open model (some people call it open source, and I don't want to get into the nitty-gritty on that distinction) is a model with open weights that you can have access to and that you as a user can fine-tune to your own purposes. And that allows for greater competition, and in some ways it addresses digital divide problems in places around the world that don't have the billions of dollars you need in order to build these giant models like OpenAI's model or Anthropic's model or Google's. Yeah. So how does the rise – and I am very aware of the open models. We use them in our research –
And as a side note, when I speak to my colleagues who come from non-English-speaking countries, they say that the quality of the large language model output is precipitously lower than what we're getting in English, just in terms of the fluidity of the answers and even the sensibility of the answers. So this idea of the digital divide is very real. And I know that those researchers, and others in less well-served countries,
are very excited about the idea of taking these models and tuning them for their own purposes. There are also cultural things embedded in these models that they don't like, as in, that's not part of our culture. And so that gives a lot of hope for modifying them and having appropriate local models. But then there's downside risk, and people are especially worried that open models in the hands of bad actors could create a lot of problems. And so what's your take on that?
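To make the idea of fine-tuning an open-weight model to your own purposes a bit more concrete, here is a minimal sketch using the Hugging Face transformers and peft libraries. The model name, adapter settings, and training setup are illustrative assumptions, not details from the conversation.

```python
# Minimal sketch: adapting an open-weight model with LoRA (parameter-efficient fine-tuning).
# The model name and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B"  # any small open-weight model on Hugging Face works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# LoRA trains small adapter matrices instead of all of the model's weights,
# so a lab or organization without large compute can still specialize the model.
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full parameter count

# From here, a standard training loop (e.g., transformers.Trainer) on a local,
# language- or domain-specific corpus produces a locally adapted model.
```

Because only the small adapter matrices are trained, this kind of local adaptation is feasible at a tiny fraction of the cost of building a frontier model from scratch.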
That is true and a genuine fear. So in the
first year or so of the ChatGPT revolution, perhaps the most identifiable social harm from AI was the explosion in child pornography that came as a result of open image models. And we will never live in a world without an infinite amount of virtual child pornography. And that is a direct result of the fact that these models were widely available. Now, you know, we can...
legitimate arguments can be had about whether that cost is exceeded by other kinds of benefits of openness. But you're right, which is that if these models are extremely powerful, they can get into the hands of, you know, adversarial governments, terrorist actors, criminals, all that kind of stuff. And in some ways this is the biggest question, I think, from the standpoint of AI regulation and policy, which is, are we going to join arms and jump over the cliff when it comes to open models? Because to talk about this as if it's like the social media universe, where we've got a few actors and you're just going to regulate them, is, I think, mistaken. You have to decide, well, are we going to have a more competitive open ecosystem, with the possible downsides that seem to me to be almost baked into the idea of openness?
Now, the open models, just if we can get a little technical for a minute, they come at a cost, which is that it's still incredibly expensive to tune them. Like you were talking about tuning. And my lab works in this area. And there are open models that we can't touch because we don't have enough computers at an academic lab. I am at Stanford University. You might think we would have the resources, period. We do not. So we have to take smaller models. We have to do a lot of tricks to get them to do what we want them to do.
So there is still this issue of access to compute in addition to access to the models. And that seems to be practically a big barrier to this open competitive world that is pretty attractive the way you've just described it. What about that? And I know it seems very mundane. Like we're talking about computers and that Russ can't get the computers he wants. But I'm sure there are countries that can't get the computers that they want.
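As an aside, one of the "tricks" Russ alludes to for running larger open models on modest hardware is quantization: loading the weights at reduced precision so they fit in limited memory. Here is a minimal sketch using Hugging Face transformers with bitsandbytes; the model name and prompt are illustrative assumptions.

```python
# Minimal sketch: loading an open-weight model in 4-bit precision so a ~7B-parameter
# model fits on a single modest GPU. Model name and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative open-weight model

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on whatever GPU/CPU memory is available
)

prompt = "In two sentences, explain what an open-weight language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```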
Well, that's true, but you're going to end up having other kinds of open models that will be smaller and less compute-intensive, where the energy and other costs for inference are going to be lower. And so I think there's going to be a diversity of models out there. And so you're right that if you want to develop your own sophisticated open model off of Llama, it might be more compute-intensive, but there's so much innovation happening in that space. I can't remember exactly what Meta has said, but Llama has been downloaded like a billion times or something. It's some huge number. And so we should expect...
all kinds of models, some which are computationally intensive, but a lot, as we've seen recently, that are less expensive. Yeah, that's exactly the path we've taken. We're interested in doing research on drugs, and we don't need a large language model that can tell me about the history of America or who won the Oscars or who won the Super Bowl. So it turns out you can do this thing called distillation to make a much smaller model that's good at one thing. And that's actually very attractive for a number of reasons.
It means your model won't try, like a human, to be an expert at something it's not an expert at. And we all know people who try to do that, and it would be nice not to have AIs that do that. This is the Future of Everything with Russ Altman. More with Nate Persily next.
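For readers curious what the distillation Russ describes looks like in code, here is a minimal, generic PyTorch sketch: a small "student" model is trained to match the softened output distribution of a larger, frozen "teacher" on a narrow domain. The temperature value and loop structure are illustrative assumptions rather than a description of any particular lab's pipeline.

```python
# Minimal sketch of knowledge distillation: the frozen teacher's predictions serve
# as soft targets for a much smaller student model on a narrow, domain-specific corpus.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened output distributions."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

def distill_step(teacher, student, optimizer, input_ids):
    """One training step: the teacher is frozen; only the student's weights are updated."""
    with torch.no_grad():
        teacher_logits = teacher(input_ids).logits
    student_logits = student(input_ids).logits
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Dividing the logits by a temperature above 1 spreads probability mass over near-miss answers, so the student learns from the teacher's full output distribution rather than just its top prediction.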
Welcome back to The Future of Everything. I'm Russ Altman and I'm speaking with Nate Persily of Stanford University. In the last segment, we talked about some of the basics about how AI is impacting democratic processes and how there are worrisome abilities of AI to interfere with our understanding of the truth and our communications between and among each other.
In this segment, we're going to talk about regulation. How the heck can you regulate AI? Why would you want to? And what is the status of regulatory efforts across the world? I want to talk about regulation, Nate. People have talked about regulating AI, but a lot of people haven't seen any meat on those bones. So what is this? I know you thought about it. You've written about it. What would regulation of AI even look like? And what are the chances of actually getting it to happen?
So there is some meat on the bones. Just right now, it's European meat. So the Europeans have been the most active in pursuing this. And so the European AI Act, as is true with tech regulation generally, is the first mover that may have a pretty important impact on the entire ecosystem. Now, one of the interesting things about the European AI Act, and this is a lesson, I think, for other regulators, right, is that when it was being developed...
and they spent years working on it, it did not include generative AI. And so they were about to pass this law, and then suddenly the ChatGPT revolution happens, and they sort of have to go back to the drawing board in order to rewrite the law to deal with these large language models and generative AI. Originally, it was about things like
facial recognition, discrimination, some other things like that. And so I think there's a lesson there about how difficult it is to regulate fast-moving technologies in general, but AI in particular, because of how ubiquitous it is going to be in our lives. And so there are a lot of features to the AI Act, and we in the Cyber Policy Center have just published an enormous volume called Regulating Under Uncertainty, which is all about this. It's a 450-page canvass of all the regulations out there. And the long and the short of it is that they make a decision about regulating certain sectors, like the application of AI in cars and in jobs and in criminal justice. Some of them are high-risk applications where different types of rules apply; some of them are seen as low-risk, where disclosure requirements would apply.
And then regulating the technology itself, certain rules for these large language model developers or what they call general purpose AI tools. And so I think everybody has sort of been scrambling to figure out, like, how do you identify...
what the risky innovations are, and how do you do it in a timely way, before they get deployed and the horse is out of the barn? And so we had some, you know, attempts here in California that were then vetoed by the governor. There's going to be, I think, another round of attempted legislation there. But around the world, you're seeing the development of these AI safety institutes to try to come up with norms about the development of these new AI models.
So let me ask about that. And let me go right to a point that I think is right underneath the surface here, which is there's also an element of competition. I think a lot of us believe, or a lot of people believe, that there's an AI competition not only between the companies, but between nations, and that there's going to be an AI upper hand that somebody may have, and therefore others might have the lower hand. And in fact, in particular, there's concern about China's capabilities. And I know that some of the dynamics about...
the worries about the European actions is that they're taking themselves out of this competitive game. I don't know if that's fair, but you may have looked at this, and how does competition interact with the idea that, well, in order to win, we have to let all things happen, because then we'll figure out what the good stuff is, and that that's the way we make progress? And if we put gates and guidelines and rules, we're going to shoot ourselves in the foot with respect to competition. So how do you see that playing out?
I think that is a fair criticism, both of Europe and of other efforts in the West to regulate. At the same time, there are potential dangers with this technology. So to sit back and just let the technologists have free rein, right, is inviting real risk. And so that's why I emphasize what's happened with some of these open models at the front end, which is that, all right, you know, we now live in this world
where synthetic imagery is going to be a permanent part of the landscape. And so there are sort of macro regulations and micro regulations. So there are things like, for example, regulating the use of AI in political advertisements, which seems like low-hanging fruit. Other kinds of disclosure regulations to prevent people from being deceived through, say, voice cloning technology. And so all of those kinds of regulations, I think, are sensible.
In addition, regulation is inevitable. And I think people need to understand that, which is that there are certain questions that simply need to be answered by law. So, for example, the copyright questions with these AI models, to what extent can you train these large language models on copyrighted data? Whether it's Congress or the courts that are going to be answering that question, you're going to have law that
applies there. Similarly with defamation: what happens when ChatGPT says Nate Persily robbed the bank, and that was false? Well, we need somebody to come up with the rules on that. So we need rules of the road. The question is, do we need more than that, rules that might retard AI development? And I think part of the critical question here
is whether governments even have the capacity to regulate AI. You do not have inside government the level of expertise that's necessary on the enforcement side to implement any of the most aggressive and innovative sort of regulations that you might want.
And so I think that the future is one in which we sort of regulate AI companies in the same way that we regulate the financial industry, where you have sort of outside private auditors who are paid for by the firms but are regulated to try to prevent conflicts of interest. We know those are always problems.
So that then you have some third party that ensures that the AI companies are not checking their own homework when they make promises about how their models are going to behave. But I think the command and control model, which is kind of the European model, is so dependent on a level of expertise that does not exist in government today that it's going to be very difficult for them to pull off. And you are right about the AI race, which is that, look,
If it is a competition for the more powerful model, right, and you have an adversary, let alone an authoritarian government, that ends up being the one that wins the AI race, that is bad for democracies as well. Right. And that's easy to understand. I just wanted to ask quickly about the Global South. They are often left behind technologically, and yet they are a huge portion of the Earth's population, and they're...
beginning to organize and have a voice in all of this, do they get on your radar at all when you're thinking about regulatory approaches and the growth of AI globally? They do, and I think that...
There are sort of several important actors here. Obviously, India, whenever you talk about technology, is going to be, I think, a pretty important player in the AI space. There are concerns that you see about the digital divide and whether they're going to be left out. That's one of the reasons, one of the selling points, as we discussed, about these open models is that it does...
attempt to correct some of that. What was fascinating to me as I did some traveling last year, in my role as director of the Cyber Policy Center at Stanford, is, like, I remember going to Japan, which admittedly is not the Global South, but since we were talking about the West, and meeting with the digital minister there. And I was talking about AI risks, and he sort of stopped me midway, and he says, just tell me, what's the killer app?
What is it that AI is going to be able to do for us? And they were thinking about it in a totally different way. Same with the South Koreans: less on the risk side, but, like, how can it solve certain problems? What was fascinating to me in Japan
and in Korea, is that people were thinking about this in connection with the low birth rate, right? They're like, we don't have enough people. So having robots, having AI make our population more productive, is for them a high priority. Yes. And then, of course, I think the answer is healthcare. And this is where I work. And I think in AI, the upsides on healthcare look really good. Okay. So you're a law professor. In the last couple of minutes, what impact is AI having on the
practice and the teaching of law? Is it revolutionizing them? Can I fire my lawyer and just develop my paperwork by having ChatGPT draft my contracts? Or is the report of that happening a little premature?
Well, I guess I have to, in my role as a lawyer, I have to say, do so at your own risk. You can do that, but it's risky. And most famously, about two years ago, we saw a lawyer that used ChatGPT in order to write a brief.
and then was surprised that it made up certain cases, and then he was disciplined by the courts. And the Chief Justice, John Roberts, has also already issued guidance about the sparing and cautious use of AI because of the likelihood of hallucinations. My colleague here in the law school at Stanford, Dan Ho, has written a series of papers sort of taking down the use of AI in legal research because of the risks of it hallucinating and coming up with false contentions. So we are not there yet, but nevertheless,
As was true with the types of tools you were talking about before, AI will be useful to do certain tasks that lawyers do. I mean, as is true with any writing, doing first drafts of briefs and doing other kinds of legal research where you have much more supervision, you're going to see associates that are going to be using that tool.
But where the real punch, I think, will come is in mass adjudication. So there are backlogs in the U.S. and elsewhere of, you know, say like Social Security benefits, veteran benefits, other kinds of huge backlogs of cases where an AI opinion on so many of those cases
and those types of conflicts and claims will be very important in working through a lot of that backlog. Now, that doesn't mean you take humans out of the loop. You always have to have the opportunity for appeal to a human to validate whether the decision was right or not. But we're in the worst of all worlds right now, which is that, yeah, you have human review of these decisions, but no one's actually doing the review. And so they're just sitting there without the claims being resolved. Yeah. And I do know that humans are better at reacting to
advice than coming up with it. So even a system that was only okay, it would focus the attention of the human decider to say, okay, the AI thought this. I get the idea. Let me see if I believe that based on a perhaps more cursory review of the key points in the document. Are law students getting this message from you and not using ChatGPT in their legal writing? Ha, ha, ha.
Well, I tell them they can actually use it on their papers with me, because I think we as professors just have to recognize that they're going to use it anyway. And so better that they use it responsibly. The rule in my classes where I assign papers is that you can use AI, but if there's one hallucination, you fail.
And so it's sort of the nuclear option to discipline them. Wow. There you go. And so far, so good. I've seen some studies that show that AI does better or worse in certain subjects. And the one area where it's really, really bad is election law. And so when I teach my election law class, I put the exam through ChatGPT, and so far it hasn't written an A exam. Well, that's good. It's good to know that, at least right now, your profession and your specialty are not at risk. Although if you write too much, they'll be able to train it to be much better and we'll have a little AI Nate. That's right. Well, I look forward to that, you know, it'll cut down on my workload as well.
Thanks to Nate Persily. That was the future of AI and democracy. Thanks for tuning in to this episode. You know, we have more than 250 episodes in our archives, so you can listen to a wide variety of conversations about the future of anything. If you're enjoying the show, please rate and review it. We'd like to get a 5.0, but do what you think is best. It'll help us spread the word, and it'll help people who might be interested find out about the show.
You can find me on a lot of social media like Blue Sky, Mastodon, Threads, at RB Altman or at Russ B. Altman. And you can also find me on LinkedIn, Russ Altman, where I announce all of the new episodes. And you can also follow Stanford Engineering at Stanford ENG.
If you'd like to ask a question about this episode or a previous episode, please email us a written question or a voice memo question. We might feature it in a future episode. You can send it to thefutureofeverything@stanford.edu. All one word, the future of everything. No spaces, no underscores, no dashes. thefutureofeverything@stanford.edu. Thanks again for tuning in. We hope you're enjoying the podcast.