The Wired AI Elections Project tracks how generative AI is impacting elections globally, with a focus on deepfakes, AI-generated content, and AI candidates. It includes a geospatial map showing instances of AI use in elections, categorized by region and type of use.
Deepfakes are being used in both sanctioned and unsanctioned ways, such as the Biden robocall discouraging voters and Imran Khan's party-authorized deepfake declaring victory while he was in prison. They are often emotionally resonant even when people know they are fake, and audio deepfakes are harder to detect, especially in regions like South Asia where they circulate on closed platforms like WhatsApp.
The liar's dividend refers to the idea that when everything can be falsified, nothing is real. This concept is amplified by AI, as people may question the authenticity of even real evidence, leading to a blurring of shared reality and truth in political discourse.
AI candidates are appearing worldwide: A.I. Steve in the UK acted as a constituent-facing avatar for a real candidate, while Yas Gaspadar in Belarus represents exiled political dissidents who cannot safely run in person. These AI avatars allow for virtual representation and engagement with constituents, bypassing physical limitations.
Platforms struggle with detecting AI-generated content due to lack of transparency in detection systems and inconsistent labeling practices. Watermarking and hashing technologies could help, but they require universal adoption and cooperation among platforms, which is currently lacking.
In countries like India and Pakistan, AI is being used for deepfake audio messages and personalized outreach via WhatsApp. In Indonesia, AI avatars are used to create viral campaign content on TikTok, reaching billions of views and influencing voter perception.
AI can influence voter turnout and perception by creating personalized outreach that makes voters feel seen, even if it's automated. In some cases, AI-generated content can go viral, significantly impacting a candidate's popularity, especially among younger voters who may not have historical context for certain political figures.
The broader implications include a potential erosion of trust in institutions and shared reality, as well as the need for increased investment in trust and safety measures on platforms. The lack of these measures could lead to more AI-generated disinformation and further polarization.
Hey, TED Talks Daily listeners. I'm your host, Elise Hu. Today, we have an episode of another podcast from the TED Audio Collective, handpicked by us for you. It's been a big election year, and it's the first one where AI has dominated the conversation. Generative AI has made it easier than ever to create voter confusion through deepfakes and misleading false news articles.
This week, we're sharing an episode of the TED AI Show all about this emerging technology's impact on our political landscape. Journalist Vittoria Elliott shares how people are using artificial intelligence programs right now and what governments need to do to protect against future AI chaos. If you want to hear more fascinating discussions about AI, listen to the TED AI Show wherever you get your podcasts. Learn more about the TED Audio Collective at audiocollective.ted.com.
Now on to the episode. So I've been thinking a lot about the elections this year because this year feels different, but not for all the reasons you're thinking about. I'd say this year, national and local elections are objectively different from any other because it's going to be our first AI election. We're less than 100 days until election day, and you've probably seen it amidst all the other election news. AI is entering the chat.
But it wasn't until I was scrolling on X one day that I realized what the next few months might look like. X had just updated Grok, their AI chatbot, so that it could generate images using a largely uncensored open source model called Flux. And immediately, the results were far more unhinged than anything we've seen.
Now, we've been able to generate images in the past, but this is the first time we've had these capabilities put into a social media app that 250 million people use every day.
I saw photorealistic pictures of Kamala Harris and Donald Trump in bizarre situations that ranged from easily clickbaitable images of them lovingly holding hands to skin-crawling images of the two of them celebrating 9/11. Suddenly, my feed was full of wild Harris and Trump memes, and I started to realize it's kind of impossible to imagine a future where AI won't have an impact on elections moving forward.
It feels like uncharted territory. But actually, in many ways, the U.S. is late to this specific party. Other countries around the world have been adapting their elections alongside the rise of AI. And it goes beyond just memes. Deep fakes and chatbots have been deployed in surprising ways in some of the world's biggest elections, like in India and Pakistan. And in Europe, there's even AI candidates that have run for office.
So, what can we learn from the rest of the world to prepare for our first AI election? And is it really just doom and gloom? Or can AI actually be good for democracy? I'm Bilawal Sidhu, and this is the TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything.
Today, I'm talking with Vittoria Elliott, a reporter at Wired who specializes in disinformation and social media platforms. And she's in the middle of a massive endeavor to collect stories about AI and elections from around the world for Wired's AI Elections Project. If you go to the website, you'll see they've got a geospatial map that's basically tracking all the ways this tech is being used. When I checked it out, I was struck by how important a tool like this can be.
It's a perfect example of what I'd call open source intelligence: a compilation of public sources that can reflect relevant insights and build awareness. And who knows, maybe those insights could be the key to updating our democracy's firmware ahead of our first AI election. I was really interested in asking Vittoria to walk me through some of the most important stories she'd come across and how AI is reshaping elections around the globe. Hi, Vittoria. Welcome to the show. Hi, Bilawal. Thank you so much for having me.
So let's get right into it. Can you start by walking us through the Wired AI Elections Project? Yeah. So this year is the biggest election year pretty much ever. There are about 65 elections happening around the world. This is a massive year for democracy, and it's certainly the biggest election year since the advent of the Internet and social media. And I think we saw it as a moment to really take stock of how technology and democracy are interacting, particularly in this moment of generative AI. So part of it was to see how this is actually going to impact democracy, but also, are the things that people were afraid of the trends we're going to see, or is it going to be something else? And how are those trends going to look different in different parts of the world? So much of technology discourse is really U.S.-centric, because a lot of big technology companies are U.S. companies. But that doesn't mean that's what that technology or that innovation is going to look like in other places. That doesn't mean that's how it's going to be used in other contexts. We have a big world map that shows every country that's having an election this year, and then whether or not there have been instances of generative AI showing up in relation to that election.
So this could look like the Joe Biden robocall that happened right before the New Hampshire primaries, where a voice clone of Joe Biden called voters in New Hampshire, discouraging them from voting in the Democratic primary. That's an instance of generative AI. But then we're also seeing things like, in Pakistan, a deepfake of the former prime minister Imran Khan that he and his party authorized, declaring victory and speaking to people, because he was in prison. So we see both of these really interesting examples, something being sanctioned and something unsanctioned. We have these types of examples in the project, and they're broken down into regional categories like North America, Africa, Europe, etc. A user can look at the map and see how far this technology is reaching in terms of where it's showing up in elections, with a description of what happened, when it happened, where it happened, and then a link out to the original reporting.
I think it's super exciting to have this all collated in one place. It's also slightly disconcerting. And on that note, you know, I think when people think about generative AI and elections, people immediately jump to deepfakes.
How are deepfakes affecting campaigns in the U.S. so far, both at the national level, you gave the Biden example, and at the local level? The deepfakes, I think, are the most visible form and sometimes the easiest to detect. So I think that's why we see so many instances of that. And even in the project, you'll see that there is a bias towards visual media, because that's just easier in many cases to confirm that something's fake.
And a lot of times, just because we know something's fake doesn't necessarily mean it's not emotionally resonant. So I think a really great example of this is a couple weeks ago, Elon Musk, owner of the platform formerly known as Twitter, currently X, tweeted a parody video of Vice President and Democratic presidential nominee Kamala Harris saying she was the ultimate DEI hire, which is one of the things the right really likes to say about her. But it was using her voice, saying that in the style of a campaign ad. And when he initially tweeted it out, it didn't have a disclaimer saying that it was parody. It didn't have labels saying that it was AI-generated, which many platforms require. But I think even people who really do believe that about Kamala Harris might know that that video is not real, but it's still really emotionally resonant, because it's the voice of this politician saying something that people kind of already believe to be true. It's less that people are getting tricked than that they are seeing represented out in the world something that, up until this point, they've only personally believed to be true. And then I think when we're looking at the local level, a really interesting example
is Peter Dixon, who's a congressional candidate in California. He used AI in a campaign ad to make it look like he was jumping through various points in his life, in different locations, to illustrate his background. And then we had the example of Jason Palmer, an investor and businessman who ran in the Democratic primary in American Samoa, and he very openly used AI to conduct that campaign. He used AI to create a deepfake avatar of himself, and it answered questions about his public policy; people could ask it things. So there's this broad swath of ways it can be used really legitimately: hey, this man is not necessarily going to fly to American Samoa, but how can people in American Samoa feel connected to this candidate? Maybe it'd be good for him to have an AI avatar that can answer questions and state his policy positions.
And then, you know, on a more hyperlocal level, there is someone who's actually running for mayor of Cheyenne, Wyoming, as an AI candidate, in the sense that he has created an AI bot. The bot's name is VIC, for Virtual Integrated Citizen, and VIC is ingesting thousands and thousands of city council documents to make policy decisions. That's what we're seeing on a local level, and that's not a deepfake. So I think we're seeing these tools used in all these really innovative ways that go beyond just trying to trick people, and more around how they can be useful to campaigns, and very specifically, how they can make people feel connected to these issues, even if people know that the thing they're dealing with is not a real human.
You know, it strikes me that there is this sliding scale from pure utility to, let's say, deceptiveness on the other end, and it's a blurry line. The example you gave of a politician basically making themselves more accessible, almost like an async virtual town hall where you can go ask this politician questions and maybe learn about their views a little better in a more intuitive fashion. It's also interesting to see folks bolster the media they're pushing out there, like illustrating the various stages of a person's journey and being able to bring people along for the ride, making it feel a little more transportive and authentic. I think that's very exciting. And then the last piece, the meme one: you can create content that is overtly like, hey, this is intended to be fake, but it can still have a visceral impact on you, both in a good sense and maybe in a negative sense too, where it kind of bypasses your rational filters because it triggers this emotional reaction in you. And even if you know, hey, this is factually incorrect, you're still going to be influenced by that.
Right. Totally. I mean, I think when we think about the use of AI in satire and parody, in situations where, again, people know it's fake: just because people know it's fake doesn't mean it's not resonant to them.
On that note, the lines do get blurry at times. We're getting to this point where events and photos are being called, in quotes, AI. Of course, the recent example is Donald Trump claiming that Kamala's crowd size was, in quotes, AI'd. How do you think this dynamic alters how to run a campaign, when you can call out factual things as being fake, even though I'd assume these are exceedingly photographed events?
So there's actually a term for this. It's called the liar's dividend. And it's sort of the idea that when everything could possibly be fake, nothing is real.
As someone who used to cover tech in the global south specifically, everything comes home to roost. We see it elsewhere before it comes here. We saw that with the abuse of social media for disinformation campaigns in places like India or the Philippines before it became a problem in the U.S. And when we're talking about the liar's dividend, we saw the same thing last year in India: politicians whose real leaked recordings were being shared, saying bad stuff, and their immediate response was, that's fake. That's not real. It's AI. And, you know, I think back on the Donald Trump Access Hollywood tape. I think if that happened now, he would just say that was fake audio, that was AI-generated. And so I think what we're really going to see is this further blurring of a shared sense of reality and a shared sense of truth, because if nothing is real, anything is possible.
There's something so interesting, dare I say earth-shattering, about that. Because at the core of it is: wait, I can't trust my eyes anymore, because there's real evidence of wrongdoing out there, and now more and more people are deploying and employing this excuse. I have to ask you, when you see this happening, and now that you're tracking all of these examples, good and bad, where do you net out on your zero-to-ten scale from excited to extremely scared?
I think I net in the middle, mostly because what this shows me is the incredible misprioritization of what we're innovating for on a grander scale, rather than thinking about the social impacts. Silicon Valley has always been move fast, break things.
The AI thing is always very interesting to me because it repeats so many of the mistakes of Web 2.0. The idea of deploying technologies before we know their social impact, the idea of not thinking about how something's going to be used outside of the exact context that you thought you were designing for, not necessarily being able to differentiate like what's real and good and what's not, not being able to control who uses your product.
All of these baseline things that we're still trying to figure out. We don't even understand on a policy level, and I think even on a company level, how we deal with content moderation. That has been the conversation from basically 2010 onwards: how do we deal with content moderation on Web 2.0? And instead of thinking about how we can deal with this, it's all the same mistakes, all the same behaviors, all over again in different iterations for the AI revolution. And it doesn't really feel like we've learned very much.
Now, it strikes me that even Web 2.0, and let's say social media as an exemplification of that, ended up being quite valuable in the election context as well. Right? I remember reading this article, "When the Nerds Go Marching In," about Obama's reelection campaign circa 2012. And so I'm kind of curious, given the tools at our disposal, do you think that AI could be an effective policy messaging tool?
If it can be factual. And it seems like there are certain approaches to make it more factual than not. Yeah. I mean, the biggest issue is transparency. So I think a really good example, for instance, is like Indonesia during their election this year. The man who was formerly the head of the military under the country's former dictator won that election.
And there were people who worked for his party who openly admitted to using AI, specifically to write campaign speeches. And that's great, except apparently they had built that on top of ChatGPT, and OpenAI has said, we don't want our product used for politics. How are they tracking that?
How are they ensuring that's not happening? So I think, yeah, there could be really valuable ways for this to be used for, say, campaign messaging, or like the example we saw in American Samoa, where someone can run, as you described, a sort of asynchronous town hall.
You know, this kind of brings me back to how you see social media factoring into all of this. Since the last presidential election, we've seen how important it is in campaigning, but also in spreading misinfo and disinformation. The FEC decided earlier this year that they wouldn't put new restrictions on AI in political ads, which means it'll be up to the platforms. And you gave this example of somebody illustrating the journey and stages of their life in a campaign ad.
What are platforms doing right now to prepare for this very pivotal year and to curb AI-generated content related to elections? Are they trying to get ahead of it, or do you think it'll again be more responsive and retroactive? So first off, everything platforms are doing right now is voluntary. Number two, a lot of them have really leaned into labeling, where they're saying generative AI content on their platform has to be labeled.
But so often it is very difficult for them to detect AI-generated stuff on their platforms. We don't really have a ton of transparency into how they're detecting this stuff, what systems they have in place, etc. And companies are not necessarily sharing data back and forth about what's being generated on their platforms, so something can't be labeled consistently across platforms. We saw this also with disinformation, where maybe something would get taken down on Twitter but it would live on Facebook, or it would get taken off of Facebook but still be platformed on YouTube. There are all these gaps because these companies are not sharing information or coordinating with each other, except on things like child sexual abuse material or terrorism, where there are actual legal implications, right? A lot of companies that have AI models have said, you know, we're going to start watermarking, meaning that anything that comes off our platform will have some sort of signature on it, whether or not it's visible to a human, that a machine can read to say, hey, this is AI-generated.
OK, well, that's great. But that implies that everyone has to be a good actor. Yeah, exactly. Everyone has to agree to watermark and everyone has to be able to read everybody else's watermarks. Right. If you're a bad actor, you have no incentive to use technology that's going to watermark your content. You know, we're talking about something that's detectable by a machine. But will that be detectable by a human? Who knows?
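To make that machine-readable-but-invisible idea concrete, here's a toy sketch of a least-significant-bit watermark. This is purely an illustration under assumed names, not how any production system works (real approaches, like C2PA provenance metadata or statistical watermarks, are far more robust), and the demo at the end shows exactly why a bad actor can strip it trivially.

```python
import numpy as np

# Hypothetical tag marking an image as AI-generated, unpacked to 48 bits.
TAG_BITS = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide the tag in the least significant bits of the first 48 values."""
    flat = pixels.flatten()  # flatten() returns a copy
    flat[: TAG_BITS.size] = (flat[: TAG_BITS.size] & 0xFE) | TAG_BITS
    return flat.reshape(pixels.shape)

def has_watermark(pixels: np.ndarray) -> bool:
    """A machine can read the low bits; a human viewer can't see them."""
    return np.array_equal(pixels.flatten()[: TAG_BITS.size] & 1, TAG_BITS)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(img)
    print(has_watermark(marked))         # True: the signature survives
    print(has_watermark(marked & 0xFE))  # False: zeroing low bits (or just
                                         # re-compressing) strips the mark
```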
Yeah, this is going to be so challenging. You're totally right. The good actors, sure: oh yes, there happens to be a watermark in the image I uploaded from a commercial image generator. But are people, as they scroll through their feed, actually going to care about that? And bad actors are certainly going to try to avoid it, either by removing the watermark or by using tech that doesn't have one. And that's just on the visual side. I'm even curious about fake engagement and bots, which have been a thing for at least half a decade now. This is a well-publicized strategy from 2016, right, where Russia sowed discord on social media, and their strategy was almost like, hey, let's just start stirring these very polarizing issues online.
And you emulate the kind of rage bait that then drives engagement, and it's a total mess. Do you think there's a strategy for addressing AI-generated bot farms that can start impersonating voters and adding themselves to this public discourse? Yeah, I mean, that is a massive problem. It's actually very interesting. OpenAI released its first threat report at the end of May, and what it showed was that foreign actors are still trying to figure out this technology. They're still not sure how useful it is, you know what I mean? Which I thought was very funny. I was like, oh, I guess we're all confused. But we definitely do see instances of them. So a really big strategy that we see in foreign influence operations is this:
They will link to websites that are meant to look like a legitimate information source. And then they will use generative AI to populate articles that reinforce certain political views. Those articles then get shared on social platforms. So it's not necessarily the bots or the content themselves, although those sometimes are also AI generated. Right.
It's creating these websites that are populated with ChatGPT-style AI bullshit articles and then sharing those on social platforms. We're definitely seeing foreign influence operations experiment with this. And again, it's one of those things where they're still testing it out, too, and seeing what's most effective. After the break, I asked Vittoria about how AI is reshaping elections around the world in ways you might not expect.
So I want to transition to global elections because, you know, as you mentioned, the global angle here is very interesting because oftentimes we're seeing, you know, the initial instantiations of these technologies being used both for good and bad overseas before it makes its way back here. It feels new to us in the United States and maybe even in Europe.
But the rest of the world is like, yo, we've been dealing with this. I've got friends in Pakistan who've been WhatsApping me, like, yeah, deepfaked politicians, new voice recording, so common. Another one floats around WhatsApp groups almost every week. So how has AI normalized this kind of content in Pakistan? South Asia, particularly India and Pakistan, places with a really high concentration of highly educated people and tech talent.
That's where we're seeing a lot of this. And in Pakistan, there's the legitimate use of this, as we mentioned with Imran Khan. But then during the elections, there were also deepfakes of local politicians telling people not to vote, or telling people that they were dropping out of the race. And I think one of the big things, particularly in the global south, is that audio messages are particularly common, especially where people use WhatsApp. So there are a ton of instances of audio deepfakes, and they are very difficult to detect, because you're not going to have the same signals that you would have with a visual medium, like, hmm, that guy has six fingers, or, hmm, that looks a little glitchy. It's purely one medium. And a lot of times it circulates on platforms like WhatsApp, not publicly on social media like X or Facebook. It's circulating in these closed communities on encrypted platforms. It's incredibly difficult to detect and harder to debunk. India and Pakistan both have a lot of really fabulous tech talent, and we see a lot of companies coming up to service that market. But in general, most of the AI tools that we're seeing, and the tools to detect AI-generated content, are trained on and built for data from the Western market. When we're looking at markets in the global south, people are recording stuff on phones that are not as advanced as an iPhone, meaning the baseline quality of the content might be lower. And that makes it so much harder for these detection tools to flag whether something is fake. False positives and false negatives are so much more common when stuff is coming off of these lower-quality phones. Secondly, when you are working with markets that speak English in a non-Western way, pidgin forms of English, accented English, content is more likely to be wrongly flagged as a deepfake; AI is notoriously bad when it comes to non-white people. And so when we're talking about places like Pakistan and the global south, there is a lot of interest in these technologies there, but the ways in which they are used, and the ways in which they can or cannot be detected, are totally different. These are all massive problems, where this kind of technology is entering these markets and creating all these other problems that, frankly, these companies are barely prioritizing working on even here.
Totally. And I'm curious, with all these instances, is it actually changing how people end up voting, or changing voter turnout?
So I think that really depends on where you look. A really great example came from a Wired story that I didn't write; it was written by a friend of mine, Nilesh Christopher, and his reporting partner, Varsha Bansal, out of the Indian elections. There are a ton of AI companies coming up in India now servicing the Indian market, and obviously India is a massively diverse place in terms of language, religion, culture, etc. And they found that in the lead-up to the Indian elections, a lot of local politicians were employing these AI companies to create deepfakes of themselves, similar to the Joe Biden robocall, except they were authorizing it, and they were calling the numbers of their constituents. In the U.S., we might consider that kind of thing really annoying and invasive. But what Nilesh found was that people actually appreciated that sort of personalized outreach, even though they knew it was an AI tool, even though they knew it was automated. They felt very seen and very considered by it. And it actually did make them want to vote for a particular candidate, to feel that they were getting this personalized attention through the use of these AI tools. So in that way, it can be extraordinarily powerful. In Indonesia, for instance, during the campaign, Prabowo Subianto, the former general under the Suharto dictatorship who ended up winning the election, his campaign used Midjourney to create an avatar of him looking very cute, friendly-grandpa sort of vibes. They shared those all over TikTok, it got 19 billion views, and it definitely helped make him popular with young people who don't remember the Suharto dictatorship. They didn't live through it, and so they were susceptible to this softer image of him. And that was totally authorized by the campaign. There are ways in which this technology can be used for campaigning and image management and all this stuff, and that can really affect how people perceive a particular candidate.
Absolutely. It's interesting, the India example you also brought up earlier. There was a company called Rephrase.ai that worked with Shah Rukh Khan, one of the biggest Bollywood actors, on a Cadbury chocolate advertising campaign where the call to action, go purchase the chocolate, pointed to the local confectionery store across all these regions. Like, hey, if you're going to go buy this stuff, go buy it from here.
And now it's interesting to see this being applied in a political context. I think that could make a huge difference, right? If it feels like personalized outreach. And then as far as distribution goes, there's the one-to-one stuff we're talking about, and there's the TikTok example on the other end that you mentioned, where if these videos go viral, of course that's going to make a dent, or at least put you top of mind for a lot of, in this case, presumably young voters, right?
The stuff in the middle, going back to WhatsApp, seems more concerning, because it feels harder to moderate and stop the spread. I recall a few years ago in India, around Kashmir, there was a bit of instability and the Internet got shut off. But around that time WhatsApp also instituted a limit: there's only a certain number of people we're going to let you forward a message to. The idea is that's one way to cap the spread, because if we're talking about end-to-end encrypted channels, how the heck are you supposed to stop the spread of misinformation? By the time analysts send it to, I don't know, experts in Europe to assess the deepfake, the damage has been done, right?
Yeah. Well, and I think, again, if everyone agreed, hey, we're going to watermark, or we're going to use a hashing tool for it. That's how they deal with encrypted messengers; that's how they deal with child sexual abuse material: hashing. An image will be hashed and entered into a database, so it has a specific signature. And if you try to send it, before it even leaves your phone to go into that encrypted space, it'll be caught by the platform and you'll be unable to forward it.
Because you're looking it up against this database. Exactly. Because it'll be hashed again, and the hash, its digital signature, will be checked against the database. So maybe there would be a way to say, we're going to hash everything created by AI, so anything that gets forwarded can be checked against this hash database and auto-labeled as AI. There might be a way to do that if everybody agreed and invested in it. But that's an immense amount of time and technology. And that's the thing I think people don't really think about when we're talking about all this creation of AI-generated content: now you're requiring the other side of it, which is detection.
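As a rough sketch of that hash-and-check flow, here's what it could look like, assuming a hypothetical shared database of known AI-generated images. Note that a real deployment would use a perceptual hash that survives resizing and re-encoding; the cryptographic hash used here for brevity only matches byte-identical files.

```python
import hashlib

# Hypothetical shared registry of hashes of known AI-generated images.
known_ai_hashes: set[str] = set()

def register_ai_image(image_bytes: bytes) -> None:
    """Called by the generator at creation time: hash the image, record it."""
    known_ai_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def should_label_as_ai(image_bytes: bytes) -> bool:
    """Client-side check before a message is encrypted and forwarded:
    re-hash the image and look it up against the shared database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_ai_hashes
```

The catch, as Vittoria says, is that every generator would have to register its output and every platform would have to check against the same database.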
You know, now you've created a whole other industry that's also about detection. And that also requires an immense amount of technology and investment and time and money to scale up to respond to this problem. Totally. I really want to talk to you about A.I. Steve. My mind was really blown, and I think it's a super interesting story. So why don't you start by introducing A.I. Steve?
Yeah, so he was a British political candidate who stood for parliament from the town of Brighton. And A.I. Steve was literally the candidate. He's the digital avatar of actual Steve, a real man named Steve, who was running for office. And sort of the way the campaign described it to me was,
actual Steve would be the physical embodiment of A.I. Steve. He would be in parliament doing the negotiations, talking to people, but all of his decisions would be informed by A.I. Steve. And A.I. Steve was, in a similar way, an AI avatar who could respond to constituents and collect their questions. The point of having this AI model was that constituents could say, here's what we care about, here's how we want you to vote on things, here are the issues that matter to us. And then real Steve's job was to go to parliament and do the actions dictated by A.I. Steve. And A.I. Steve did not win.
But in principle, I think that's a really interesting idea. And actually, the campaign told me that the two things that people were most interested in when they first launched it were the conflict in Gaza and trash collection.
But, you know, I think even then that has its own limitations, right? Because members of parliament, members of Congress, get classified briefings all the time and are making decisions based on information the general public may not have. And so there would still be a negotiation you'd need to have with that sort of campaign commitment, the commitment to make all decisions in line with what the AI has been able to collect from constituents. Because the reality is, the AI is not going to the classified national security briefing. The AI is not being given special documentation and numbers in the way that actual members of government are. And so there probably is a way for AI technology to be incorporated into a sense of good governance. But I think that's not what we're seeing prioritized in what's being built. And it's certainly not possible, I think, when the AI itself is the candidate. Right.
Well, you certainly answered my question of how this would even work. It's almost like a digital twin, or a proxy in the public domain, for a politician. And really, it's a way of getting a pulse of your constituents, a pulse of the nation, if you will. But this isn't the only example, right? There have been other examples of AI candidates around the world. Can you tell me the story about the Belarus elections earlier this year?
Belarus is a dictatorship, and one might argue a sort of proxy country of Russia. The Russian military is currently staged out of Belarus for its war in Ukraine, and its dictatorship, the Lukashenko dictatorship, is one of the last remaining ones in Europe. There have been incredible crackdowns on dissidents. One of those dissidents, who is in exile, created an AI candidate called Yas Gaspadar to run in the country's parliamentary elections. Obviously, this candidate did not win; the elections in Belarus earlier this year were widely regarded as unfree. But because dissent is so criminalized in Belarus, and because many of the people who might have actually stood for election are in exile, this was a way of using AI to represent those people. He can't be arrested. He cannot be subjected to the ways in which the government has used force to crack down on dissidents in the past. So I think this was actually a very clever use of AI. Just like we saw people in the Middle East use social media to power the Arab Spring, there are good, generative, democratic, creative uses of these technologies, and people will find them. But that doesn't mean that's always what they're geared towards, or the most common use of them.
Totally right. I mean, this is a perfect example of AI being effective if you are a political dissident. I was blown away by that one quote in that article, which was basically, yeah, this person can't be put in prison because he's just code. And I was like, wow, that's one way to give a voice to the voiceless.
We've gone through a lot of aspects of the current state of, let's just call it, AI elections in the U.S., and some lessons learned from other countries. I think at this point, I just have to take a step back and ask: how concerned should we really be about the ways that AI could affect the 2024 election? In the U.S., I don't know yet.
I think there are a lot of things at play. And one thing we didn't touch on, which is important to note, is that social platforms have rolled back their investment in trust and safety. Trust and safety are the people and teams who make sure that hate speech, disinformation, all that stuff stays off the platform. And on top of that, we're adding this extra layer. So I think we are certainly going to see a ton of AI bullshit on all the platforms, and I think we're going to see more of it. But the real thing is, personally, I am less concerned about the elections themselves and more about what happens after.
We are in a moment where there is less trust than there has ever been. We now have Musk, who owns X, and he has already started seeding narratives around illegal immigrants voting, things that could very easily form the intellectual foundation to question a Democratic victory.
Those are old problems that platforms still haven't solved, and I think we'll see AI play a role in them, whether that's through disinformation campaigns or through AI-generated media. For instance, Grok recently returned answers saying that Kamala Harris had missed the deadline to register to be on the ballot in nine states, and the secretaries of state of five of those states had to write to X and say, your AI chatbot is spitting out bad information. I think we'll see things like that.
But I think the core issues underlying this, which are a lack of investment in trust and safety, a lack of investment in thinking about the implications of these technologies and the necessary safeguards, and a real lack of trust and of a shared sense of reality, of the shared world we're living in, and of trust in institutions and systems, underpin this question more deeply than any variation in the technology could. That is a very profound and nuanced answer to end on. Vittoria, thanks for being on the show. Thank you. After talking with Vittoria, I had a couple of ideas about how we can prepare for the next few months of the election season.
The first thing I'm thinking about is actually something we covered in the very first episode of this podcast. When I spoke with Sam Gregory from Witness, he recommended a good exercise in media literacy that I think still applies beautifully: SIFT. As a refresher, that stands for Stop, Investigate the source, Find alternative coverage, and Trace the original content that you think might not be legit. Especially since the political landscape is changing at this unprecedented pace, it's very important to always hit those four steps and have a wide range of trusted sources at your disposal. But the second thing is a little harder to break down into specific steps, because it has to do with this concept that Vittoria brought up.
The Liar's Dividend. That's the idea that in a world where anything can be falsified, nothing is real. And so it kind of doesn't matter if you can identify disinformation out in the wild; its message can still very much influence people, even when they know it's not true. It's why memes can be such powerful weapons for political messaging. They can bypass the part of our brains that questions whether something is real and go straight to our emotions.
When I saw those AI-generated memes that I mentioned at the top of the episode, I realized that introducing this tool months before election day was opening the door for full-blown memetic warfare. Which means we're all going to have to come to terms with the fact that none of us are exempt from being influenced by content, AI-generated or not. It's going to be hard, but we've got to modify our information diets. That means adjusting how we let content impact our perceptions of reality.
There are some really exciting possibilities for AI technology to make politics more accessible, more representative, and maybe even more revolutionary. But ahead of our first AI election, it'll be up to us to learn from the rest of the world and chart our own path towards the future.
The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Ben Montoya and Alex Higgins. Our editors are Banban Shang and Alejandra Salazar. Our showrunner is Ivana Tucker and our engineer is Asia Pilar Simpson. Our technical director is Jacob Winnick and our executive producer is Eliza Smith.
Our researcher and fact-checker is Christian Aparta. And I'm your host, Bilawal Sidhu. See y'all in the next one.