Earlier this week, journalists at Wired and The Washington Post reported that a Russian-aligned propaganda network notorious for creating deepfake whistleblower videos appears to be behind a coordinated effort to promote false sexual misconduct allegations against vice presidential candidate Tim Walz.
At Wired, David Gilbert wrote that researchers have linked a group they're calling Storm-1516 to the campaign against Walz, a group he says has a long history of posting fake whistleblower videos, often deepfakes, to push Kremlin talking points to the West.
A few days earlier, NBC News also reported on Storm-1516, citing its work as evidence of Russian propaganda's growing use of artificial intelligence and more sophisticated bot networks. Your phone's getting it, your search engine's getting it, your computer's getting it, everybody's getting it on AI platforms,
even Russian propagandists. Two days after that Wired report, Washington Post journalist and Russia expert Catherine Belton reported on one of the other bad actors implicated in spreading the allegations against Walz. I'm talking about John Mark Dougan, a former Florida cop with a long and winding record that includes internal affairs investigations,
an early discharge from the Marines, and a penchant for posting confidential data about thousands of police officers, federal agents, and judges on his blog, which led to 21 state charges of extortion and wiretapping. To escape that indictment, Dougan fled to Moscow, apparently because he'd been romantically involved online with a woman there.
Whatever. In any event, once in Moscow, he put his conspiratorial blogging skills to work, effectively enlisting himself in the Russian intelligence community's internet war against America.
Belton reviewed more than 150 Russian documents obtained by a European intelligence service that shared its work with The Washington Post. And these documents indicated that John Dougan began working directly with Russia's military intelligence directorate, the GRU, corresponding with a handler in the agency's unit that oversees sabotage, political-interference operations, and cyber warfare targeting the West. John Dougan is also subsidized and directed by another entity in Russia, an institute founded by a man who happens to share his surname, albeit spelled differently.
I'm talking about far-right imperialist philosopher Alexander Dugin. Our American Dougan even filmed propaganda videos from occupied Ukraine together with the Russian Dugin's daughter before she was killed in a car bombing in August 2022. Records show, and disinformation researchers argue, that John Dougan is responsible for content on dozens of fake news websites with deliberately misleading names like DC Weekly, Chicago Chronicle, Atlanta Observer, and so on.
Lately, he's apparently started using a GRU-facilitated server and AI generator to create phony videos like the deepfake showing one of Walz's former students accusing him of sexual abuse. Now, the campaign to defame Walz as a pedophile predates the deepfake I just described. Almost two weeks before that video started circulating online, Dougan, the American one, appeared on a QAnon podcast to peddle a similar but separate false claim against the Minnesota governor.
So, this week, with just 10 or so days until the U.S. presidential election and all the legal fighting that will happen after that, let's talk about Russian propaganda, how it's evolved over the years, and how American social networks are responding, or not responding, in 2024. Welcome to The Naked Pravda.
Howdy, folks. I'm Kevin Rothrock, the managing editor of Meduza in English. This week, I spoke to Renee DiResta, the author of Invisible Rulers: The People Who Turn Lies Into Reality. She's also an associate research professor at Georgetown University's McCourt School of Public Policy,
and she's the former technical research manager at the Stanford Internet Observatory. Renee DiResta has spent many years studying how information spreads online, examining how rumors and propaganda proliferate amid emerging technologies that are always changing the landscape. Our conversation started, as many of my interviews do, with my confusion about how to pronounce certain words. ♪
What I wanted to ask you about first was this post I saw of yours on Bluesky. Maybe you also put it on Threads, I don't know where else, but I saw it on...
I always want to say "blue-ski," but it's Bluesky. I know this. But you say that Twitter used to work with researchers on attribution, since it had actual signal about, for example, the devices or networks of anonymous accounts like black insurrectionists. And I wonder, what does that mean, that it had "actual signal"? So let me use a specific example. In 2017 and 2018, mostly, I did the investigation for one of the outside teams that the Senate Intelligence Committee tapped to look at the Internet Research Agency data sets that Alphabet, Meta, and Twitter had turned over at the time. And this was the sort of canonical IRA data set. And one of the things that was very interesting about it was that Twitter turned over a spreadsheet that detailed device IDs.
And it had just metadata about the accounts themselves. So, for example, they were using Russian Beeline phones, right? That was the carrier they were using. And this was a very interesting bit of signal, because it was one of these scenarios where the accounts had names like Georgia News Network or whatever it was, things that spoke to some regionalized part of the United States, but they had Beeline as the carrier and they had particular device IDs. So these are the sorts of things where, as an outside researcher, when that data set was turned over, it was like a little glimpse of some of the different things that Twitter had available.
And there are other ways, too. When you have access to things like device IDs, you can see whether they're running multiple accounts on one device. Just different types of signal: who are they talking to in their DMs? These are things that nobody on the outside would be privy to. When the platforms do an investigation into coordinated inauthentic behavior, they have more information about the behavior of the accounts, what networks they're connecting to, whether they're using VPNs, these sorts of things.
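To make that concrete, here's a minimal sketch, in Python with entirely invented account names, device IDs, and field names, of the kind of platform-side check DiResta is describing: grouping personas by a shared device and flagging a carrier that doesn't match an account's claimed region, like the Beeline example above. It's an illustration of the idea, not any platform's actual tooling.

```python
# A minimal sketch of device-level signal: flag devices operating many
# personas, and accounts whose mobile carrier is inconsistent with their
# claimed location. All data and field names here are hypothetical.
from collections import defaultdict

accounts = [
    {"account": "GeorgiaNewsNetwork", "device_id": "dev-001", "carrier": "Beeline", "claimed_region": "US"},
    {"account": "AtlantaPatriot",     "device_id": "dev-001", "carrier": "Beeline", "claimed_region": "US"},
    {"account": "TexasDaily",         "device_id": "dev-002", "carrier": "AT&T",    "claimed_region": "US"},
]

# Group accounts by the device they log in from.
by_device = defaultdict(list)
for acct in accounts:
    by_device[acct["device_id"]].append(acct["account"])

# Flag devices running more than one persona.
multi_account_devices = {d: names for d, names in by_device.items() if len(names) > 1}

# Flag "US" personas on a Russian carrier, e.g. a Georgia news page on Beeline.
RUSSIAN_CARRIERS = {"Beeline", "MTS", "MegaFon"}
carrier_mismatches = [
    a["account"] for a in accounts
    if a["claimed_region"] == "US" and a["carrier"] in RUSSIAN_CARRIERS
]

print(multi_account_devices)  # {'dev-001': ['GeorgiaNewsNetwork', 'AtlantaPatriot']}
print(carrier_mismatches)     # ['GeorgiaNewsNetwork', 'AtlantaPatriot']
```

Only a platform holds fields like these, which is why, as DiResta goes on to explain, outside researchers could never make this kind of attribution alone.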
And so for a while, from around 2019 to 2022 or so, which was kind of the peak of this process, you did see Meta and all of the major social platforms work with outside academic institutions like the Stanford Internet Observatory. Either we or they would see some collection of fake accounts, and there would be a conversation about them. Either they would reach out and say, we have these networks we think are part of a coordinated inauthentic behavior campaign,
we would look at them as independent outside observers. We would potentially go looking to see where else on the internet they were or what else they were doing. On the flip side, I remember a couple times we had some early signal from operations targeting...
It was Libya, primarily. So various parts of Africa, but mostly Libya. And we would send a note and say, these look like inauthentic accounts, they're using these hashtags, we're observing these behaviors. And then Twitter would go and do an investigation. So this was how the process worked over various periods of time. We would never make an attribution on our own.
Because you didn't have enough data, you'd need what they could give. Yeah, right, because otherwise it's very content-based or behavior-based, where we can say, "This seems inauthentic. These accounts seem to be tweeting in unison. These accounts are following each other in a circle and they're all boosting this one guy." But that's the kind of thing that you can see domestic actors doing also, right? People who are actually who they claim they are, they're just groups of activists.
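For contrast, here's a minimal sketch of the kind of purely behavioral signal outside researchers can compute from public data alone: flagging accounts that push the same hashtag within seconds of one another. The account names, hashtag, and timestamps are invented, and, as DiResta stresses, clustering like this cannot by itself distinguish a troll network from real activists coordinating in the open.

```python
# A minimal sketch of "tweeting in unison" detection using only public data.
# Real pipelines would also check follower-graph insularity and account age
# before calling anything coordinated; this only finds lockstep bursts.
from collections import defaultdict
from datetime import datetime

posts = [  # (account, hashtag, timestamp) -- all invented for illustration
    ("acct_a", "#FreeShugalei", "2020-05-07T10:00:03"),
    ("acct_b", "#FreeShugalei", "2020-05-07T10:00:09"),
    ("acct_c", "#FreeShugalei", "2020-05-07T10:00:14"),
    ("acct_d", "#FreeShugalei", "2020-05-07T18:42:00"),  # hours later: weak signal
]

WINDOW_SECONDS = 60  # posts this close together look machine-coordinated

by_item = defaultdict(list)
for account, item, ts in posts:
    by_item[item].append((datetime.fromisoformat(ts), account))

for item, entries in by_item.items():
    entries.sort()
    # Collect accounts posting within WINDOW_SECONDS of the previous post
    # (a simplification: a real analysis would score bursts more carefully).
    cluster = [entries[0][1]]
    for (prev_t, _), (cur_t, cur_a) in zip(entries, entries[1:]):
        if (cur_t - prev_t).total_seconds() <= WINDOW_SECONDS:
            cluster.append(cur_a)
    if len(cluster) >= 3:
        print(f"{item}: possible coordinated burst by {cluster}")
```

This is exactly the gap DiResta describes next: behavior and content alone leave you unable to say who is behind the burst.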
If you're looking at content, you also have a problem, or maybe a gap, I should say, not a problem. You also have a gap, because if you're just saying, like, these accounts are putting out pro-Russian narratives... well, I mean, okay, but what is a pro-Russian narrative, right? Russia actually often picks up...
Or you have the tankies on the left, I suppose, too, right?
Yeah, you've got the tankies on the left. You have these sort of clusters of various opinions on the political right. And so you can't make an attribution simply based on content or behavior. You need a holistic picture of actors, behaviors, and content. And your best way to get this is by sharing signal with a platform that has visibility into the sorts of things that you don't.
Okay. And so what has changed? There's no more signal, or what's happening now? Well, Elon killed the program. Okay. All right.
Because academics were, like, woke censors, and moderation is censorship, and Russia is all the Russiagate hoax, there's no such thing, Russia has never run a fake account ever, it's completely just fabrications, the fever dreams of woke academics. So that program is over now. So the data still exists, obviously, but it's just not shared with outsiders anymore? Exactly. So the third kind of piece in this equation is government, right?
Right. As far as other people who have signal into inauthentic accounts and networks. And there, the government gets its signal from the intelligence community. Maybe they have some sort of asset who's aware of what's happening in a particular region. Maybe they have some signal based on, again, penetration into some sort of network, right? There's a variety of ways in which government has signal. The relationship between government and platforms was reframed as some sort of, again, censorious woke cabal, in which the act of the government sharing information, like, for example, the FBI or the State Department's Global Engagement Center sending over a tip and saying, hey, we think these are inauthentic fake accounts, that was then reframed as a censorship list from the government. And one reason for that is that, in some cases, the government lists are sloppy.
Again, they're not intended to be the final, like, here's-the-be-all-end-all canonical list that you must take down. It's: here is the signal that we have; you internally should go and look at it and make your own determination about what you're going to take down, what you're going to leave up, whether it's authentic or inauthentic, based on the signal that you have. In November of 2023...
Meta released kind of a remarkable statement. It gave a comment to The Washington Post around the time that it released one of what it calls its quarterly threat reports, where it discloses its inauthentic network takedowns for the previous three months. And it said: the government no longer talks to us. Right. And so this was a very interesting thing to see Meta say, because it was basically saying that, because of the chilling effect of some of these lawsuits, which implied that any communication was coercive jawboning, the government simply exited the space and exited that form of relationship and information sharing. Because the government was afraid of getting sued? The government was being sued. The United States government was being sued. So there was the Murthy v. Missouri case,
sometimes called Missouri v. Biden, depending on at what stage in the legal process you go looking for it. This made the argument that the government was requesting content takedowns and that this was some sort of coercive, illegal action. This point of view also appeared quite a bit in the Twitter Files, where...
The various Twitter Files authors could see those lists that were going from the FBI or the GEC, the Global Engagement Center of the State Department, to the platforms. And again, what you see most often in those conversations is the platform saying, like, thank you very much, and then taking down the things that were actually inauthentic and leaving up the things that were real Americans, or real people of whatever nationality, that were not inauthentic troll accounts.
And so you see the platform teasing those lists apart, making that determination. And then again, the culmination of that, the end state, is that Twitter used to then release the takedown data to academics like me and my team, and we would do an independent assessment and we would write up a report on what we saw in those takedowns.
And for a while, they were putting up the data sets publicly so anyone could access them. And what they did was they just hashed the usernames, meaning they turned each one into a kind of string of numbers and letters, so that you had some continuity. You could see that this account had replied to that account, but you couldn't see their names.
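Here's a minimal sketch of what that pseudonymization could look like: hashing each username with a secret salt (the exact scheme Twitter used isn't public, so the salted SHA-256 here is an assumption) so that the same account always maps to the same opaque token, preserving the reply graph while hiding the names.

```python
# A minimal sketch of username pseudonymization for a public data release.
# The salt and scheme are assumptions for illustration, not Twitter's method.
import hashlib

SALT = b"per-release-secret"  # hypothetical: held by the platform, never published

def pseudonymize(username: str) -> str:
    """Map a username to a stable opaque token within one data release."""
    return hashlib.sha256(SALT + username.encode("utf-8")).hexdigest()[:16]

# Invented sample: two accounts replying to each other.
replies = [("troll_account_1", "troll_account_2"),
           ("troll_account_2", "troll_account_1")]

# The same input always yields the same token, so the reply graph survives:
released = [(pseudonymize(src), pseudonymize(dst)) for src, dst in replies]
for src, dst in released:
    print(f"{src} replied to {dst}")
```

Because the mapping is deterministic within a release, researchers can still study network structure, who replies to whom and how often, without ever learning a real handle.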
I also asked Renee about the differences in institutional culture and temperament between the social networks' integrity teams, the government, and independent researchers. Was everyone on the same page when it came to targeting suspicious content? So we were not ever involved in the conversation between the platforms and the government. I have no idea what those relationships were like. Okay, so they were happening kind of, it was dual-track. That was happening separately. Occasionally there would be...
I remember in 2019, there was a fairly major set of two inauthentic networks that were sort of released to the public simultaneously. One was linked to Prigozhin, but it was not the Internet Research Agency. Nobody wanted to say it at the time, because it was very, very, very hard to make a concrete attribution to, like, which of Prigozhin's, you know, 30 different media outlets, troll factories, and whatever else, his entire sort of constellation of efforts. But it seemed to be tied to the Wagner Group.
And it was the first time that we had seen the Wagner Group. These were operations happening in theaters where Wagner was active, right, where there was kinetic action happening. They were there. And then there was this set of propaganda and inauthentic accounts and pages, and they seemed to be linked to the Wagner Group. But it was a very complex attribution to make. And so, again, what you see is, in our writing,
we're using phrases like "entities associated with Yevgeny Prigozhin," right? Because you don't want to get it wrong. You don't want to say IRA when it's Wagner. Maybe it seems stupid to outside people who don't care, but in academia we do strive to be precise about these things,
which is why I noticed this language around attribution in the articles that are coming out now. But the other thing, just to finish that part of the story: this was almost simultaneous with a report that I did, also for the Senate Intelligence Committee, looking at the GRU, so looking at military intelligence. And as these two reports were going to be released to the public, as this was sort of the next big set of Russian interference operations following the Internet Research Agency's efforts,
we did communicate with the Senate. I think, if I remember correctly, we did a joint briefing with Facebook, because Facebook had also participated in that investigation into the Wagner Group accounts. So every now and then, maybe once, the conversation was simultaneous, and that was more in the form of an outbound briefing. But otherwise, you never had the sort of three entities communicating simultaneously. It just wasn't how it worked. Right.
Are there consistently things in the data that are key to making those attributions? Like you mentioned before, when you had signal, and I don't know if I'm using that phrase correctly. I mean, I just mean it colloquially; I think the NSA probably uses it differently than I do. I still feel like I need to practice using this word a little bit. But when you have the information that shows, like you said, device ID numbers and the phone lines that they're using, or the telecom companies, is that typically what you're going to use to make these attributions to foreign actors? We never have that. No, just to be clear, in academia, we never have that.
They weren't giving you that? No, never. Never, never, never. We only got the content. We're just getting the content. We're getting the tweets, the accounts. So then what has stopped being released to researchers? Oh, okay. Sorry. Maybe I'm not being clear here. So...
Let's use a specific example. I'll pick Libya, because there was some inauthentic activity and a hashtag related to Maxim Shugalei. I don't know if you remember Shugalei. He was one of these people who was on the ground in Libya, actually went to jail. He was one of Prigozhin's kind of goon types. Maxim Shugalei is a political strategist who worked for Yevgeny Prigozhin in Africa, advising political candidates considered to be pro-Russian in various elections.
Officially, he's employed as a sociologist for a foundation whose founder was the former editor-in-chief at one of Prigozhin's propaganda websites. Shugalei has always denied any special knowledge of the African operations of Prigozhin's mercenary unit, the Wagner Group.
In July 2019, Shugalei and a colleague were arrested in Libya on suspicion of secretly meeting and conspiring with a son of the deposed and killed Libyan leader Muammar Gaddafi. That son happens to be wanted for alleged war crimes by the International Criminal Court, though he was nevertheless cleared to compete for Libya's presidency thanks to an appellate court ruling. Shugalei and his colleague were eventually released in December 2020. But less than four years later, just last month in fact,
the duo were arrested again, this time at an airport in Chad, reportedly on suspicion of continued espionage activity, allegations that Shugalei's research foundation, in turn, blames on Western reporting linking it to Prigozhin and Russia's influence efforts in Africa. At any rate, Shugalei and three other suspects are all behind bars at this very moment.
Anyway, they made this video, a whole, you know, Rambo-style movie about Shugalei. And people on Libyan Twitter began to tweet about it. And this was something that seemed a little bit in the realm of the uncanny valley: how many people in Libya really care about Maxim Shugalei?
So looking at those accounts, what you start to see is that the content is just incongruous. Some of the accounts are brand new. Other accounts, their tweets are scrubbed, but they forgot to scrub the likes, and they used to engage with completely different types of content.
A lot of times, one will tweet and the others in the network will amplify. The follower-following ring is a very closed circle; they don't have much penetration into the broader community. But in the relatively low-volume Libyan Twittersphere, they were able to make this thing trend. And so it attracted the attention of a researcher in Libya, an occasional collaborator, who reached out to us and said, what do you guys think about this? We looked. We said, yeah, this seems Russian, actually, Shugalei being the red flag. And this is the sort of thing where we would never then write a report saying these are Russian accounts. Instead, we sent a note to Twitter and we said, we believe we have an inauthentic network operating in these languages, posting about these topics; they appear to be connected to these things.
We would then, of course, take that content, because if they're in one place, they're usually in other places. So we went and looked on Facebook, and, what do you know, there are new pages about the Shugalei movie, right? So these are the sorts of things where you follow the narrative trail: look at the hashtags, the accounts, the personas, the domains, the domain registrations, maybe shared tags on, you know, AdWords or whatever. But we can't say anything more than: there is a coordinated network doing this type of influence, trying to make Shugalei trend. To get beyond that, this is where Twitter's response is, okay, thank you, and then they go off and do their thing. And as I recall, this was when they were still releasing the data sets to the public. So we would occasionally get 48 hours to two weeks or so of advance access to these data sets, as part of a program called the Twitter Moderation Research Consortium, in which we would have this early access and we would write an independent report. So Twitter goes and looks, and they say, okay, we attribute these accounts to Russia, using whatever signal they have. Again, maybe they know where they're logging in from, device IDs, what was purchased, who they're connected to. That part of the process is a bit opaque to those of us on the outside. They came back and said, here is what we see as the contours of the network. Facebook does the same thing. We write an analysis so that it's digested for the public through a lens that is not a tech company's, and then we release our report. That was essentially the process; that was how the Twitter Moderation Research Consortium worked. Release it simultaneously, basically? Usually they would do the takedown first, and then...
So the accounts would come down. Once the accounts came down, we also lost access to them ourselves unless we had this engagement with them. So this is where you do hear frustration from researchers and members of the public who didn't participate in these kinds of programs, that all of a sudden these networks would just disappear. And this is why, for a while, Twitter did make an effort to release these networks to the public, again, just with the usernames hashed.
But they began to step away from that, because they became concerned that authoritarian governments might use those data sets in some way. Or that was my understanding of why they began to back away from the full public release. That had happened before Elon bought the company. Is there a reason to believe they could have used those data sets somehow?
I think, you know, I really don't know. There are certain areas where... But is the idea that they would have learned how to hide better, or that they would have gone after... Possibly. Though, I mean, you'd think that they would have that indicator themselves when their accounts come down. They already know the device ID. Yeah.
Well, actually, you'd be surprised. One of the things that we saw quite a bit, and this is where, let me get even more granular: we're talking about attribution to the country level, or maybe to a notorious figure like Prigozhin, sort of an avowed digital propagandist; everybody knows he runs these things. What you start to see happen over time
is that you start to see governments subcontract out their influence operations. So in Egypt and Saudi Arabia, for example, they begin to use just digital marketing agencies, the kind of people that a famous celebrity might use to manage their social media accounts. You start to see these agents of the state wanting plausible deniability, or they don't want to bother trying to spin up an entire in-house botnet.
So they go and they begin to contract with private firms, sometimes newspapers, sometimes media properties. And so the attribution there, again, is something we never could have done ourselves. But you see the attribution that the platform makes to a very particular marketing agency. And again, that's presumably because they have some visibility into
Maybe other legitimate accounts that this entity is running share some sort of IP or infrastructure, device IDs maybe, who knows, with the inauthentic network. And so they're able to make that attribution quite granularly. And if you look at io.stanford.edu,
you can read all of our, in some cases very long, reports over the years about every conceivable major nation-state player in the information space. And you can see this movement toward using outsourced mercenaries. And that's one of the reasons why this collaborative attribution is also very important; it's just the sort of thing that's very hard to see as an outsider.
One of the core ideas in Renee's book, Invisible Rulers, is the concept of bespoke realities. She argues that people now get to choose their own epistemology, their own criteria for truth, which upends the shared consensus reality that once upon a time forced people to deliberate with each other and bridge points of disagreement, she writes. Today, it's a choose-your-own-adventure ride hypercharged by the internet, and agenda-driven propaganda is colliding with all this unverified information, this gossip, and speculation. So if we're all now swimming in this soup of rumors and disinformation,
I asked Renee what it means for the future of social networks' content moderation. So one of the things that is very interesting about state actors is that they're incredibly persistent. There is no downside, right? What is the cost? Somebody takes down your accounts. Okay, well, then you go spin up new ones. Your mercenary agency loses its accounts. Okay, well, you go hire the other firm over there. Or you do what we're seeing them do now, which is, as social media companies, okay, not all social media companies, let's be clear, Twitter no longer does this, but as Meta and YouTube and some of the others consistently impose costs and take those accounts down, they just move to the social media platforms where that cost is not imposed. So they're incentivized, in fact, to be on Truth Social; they're incentivized to be on Twitter. And you see them adapt: anytime there's a shift in the information environment, they update their tactics. So that's the dynamic of state actors.
They also realize, though, that sometimes it's worth paying real influencers, and this is what you see in the Tenet Media case, which the DOJ unsealed an indictment for about a month ago now. You see them have the realization that fighting the whack-a-mole war on social media, even if they can establish themselves in hospitable places, needs to be supplemented with something else. They need a backstop.
Last month, the U.S. Justice Department charged two Moscow-based RT employees, both Russian nationals, with conspiracy to violate America's Foreign Agents Registration Act, FARA, and with conspiracy to launder roughly $9.7 million. The two RT staffers allegedly used fake personas and shell companies to orchestrate a massive scheme to influence the American public by secretly planting and financing a right-wing online content creation company based in Tennessee that we now know is called Tenet Media. That company shut down shortly after these allegations were made public.
Now, the content itself from Tenet wasn't anything special. According to the DOJ, it was another generic sow-discord campaign focused on commentary on events and issues in the U.S., such as immigration, inflation, and other topics related to domestic and foreign policy. The Justice Department explained that the views expressed in the videos were not uniform, but most were directed to the publicly stated goals of the government of Russia and RT to amplify domestic divisions in the United States.
And we need so much help there, wouldn't you say, my fellow Americans? And so, essentially purchasing influencer time, purchasing an influencer's voice, is another very established, old propaganda strategy. So what we're really seeing is an adaptation of old strategies to new structures, to new information environments. But ultimately, the kind of content is very, very similar to the kind of efforts
you have seen state actors go for in the past, which is to find social divisions and kind of cleave them apart. And the dynamic where I think this intersects with bespoke reality is the idea that
individual people can choose their own reality. They can choose what influencers they trust, what media they follow, what people they engage with, where they spend their time online. They can construct this space where what they hear and what they see is what they want to hear and see, and that's algorithmically reinforced. You see influencers beginning to target those spaces to reach those
quite distinct communities. You see Russia go after QAnon, right? Really try to amplify QAnon content, try to join QAnon communities. You see them do this with Pizzagate early on. Now we're seeing them do this with, you know, voter fraud. These are not unique opinions that Russia has injected into the American political discourse. They're just picking up on these highly conspiratorial, extremely distrustful spaces where
If they get the right person to say it, like a trusted influencer, then it's much more likely to be received because the people who are in those communities have chosen to be in those very insular information spaces. And so if people are choosing these spaces, I mean, from a social studies point of view, I can see the obvious dangers of this. But if you're talking to, let's say, some
QAnon person, or in this case someone who believes that Harris's running mate is, you know, corrupt and a sexual deviant or I don't know what, and he hears this, he gets this fake story, and you can prove with whatever analysis that it came from this crazy ex-Florida cop who now lives in Moscow. If that person then essentially says, well, I live in my bespoke reality, and obviously they wouldn't use that terminology, but if they essentially say that, what are we left with at the end of that?
That is the question, unfortunately. I mean, you're asking the right question, because that is what happens at this point. You know, you see this. Do you remember, three weeks ago now, we had Hurricane Milton? I think two, three weeks ago. Time moves very quickly. We also had Hurricane Helene, I think right around a month ago, maybe three weeks ago now.
In those cases, you had content that went viral that purported to be authentic. For example, there was a visual of a little girl with a puppy, in a life jacket, on a boat, you know, in floodwaters. And it's very sad. You know, she looks very sad. It's a very, very sad picture. And it's framed as, these are the people the Biden administration is abandoning, right? So this is tweeted by right-wing influencers, very prominent right-wing political elites. Also, I think the RNC chair winds up caught up in this narrative.
And you have these prominent figures who take this image. Of course, it turns out it's AI-generated. This girl doesn't exist. Neither does the puppy, nor the life jacket, nor any of it. And when people begin to point that out, they don't say, whoops, got it wrong. They say,
Well, the specific image might be AI-generated, but the point is the same, right? So the goalposts move. Now we're not talking about the action; now there is literally no victim, there is no girl to be upset about. So now we're moving to being upset in the general sense about this purported idea that the Biden administration has done this thing. And that's because it is a propaganda campaign. You do see so much of this. It's not necessarily the specific image that's the issue; it's what the image is intended to convey. I am not going to get into philosophy here, but you have this divide between what is real and what is true, right? And that, I think, is a divide that keeps popping up: well, yes, this image isn't real, but what it is trying to say is true. And that happens constantly.
And I feel like that also happened in the discourse around Springfield, Ohio, and this theory that immigrants are eating the pets, they're eating the cats, they're eating the ducks. Turns out, of course, they're not eating anything. They're just living their lives. But that doesn't matter: no, what we were really trying to say when we made those memes and those accusations was that the problem is immigrants taking resources. So we're using this artificial reality to express a deeper truth. And that pivot happens uncannily often. But what you see is state actors, they're not the ones creating any of this. They're just in there as accelerants. So you do see the Kremlin-linked accounts. You do see...
who was it? Maybe it was one of the Kremlin officials. He came out and he said, "Well, see, the problem here is that if the US wasn't sending money to Ukraine, it would have had more money for these hurricane victims."
And so, again, they're doing the same thing. They're pivoting the issue. They're taking the crisis. They're kind of squatting on the terms that people are searching for to express a point of view that's advantageous to them. And I think he got about 19,000 engagements, like 19,000 likes, on that tweet. It did get some pickup. This stuff is getting pickup now.
I think Renee is referring here to a tweet from former Russian president and current Security Council deputy chairman Dmitry Medvedev, who in the past few years has embraced the sort of militant court jester's role that the late Vladimir Zhirinovsky once played. Anyway, on October 13th, Medvedev tweeted: No cash to clear up after Hurricane Milton in Florida. No cash for French farmers. No cash to revive German industry. However...
They've found enough cash to support a crazy drunken mob of Ukrainian thugs in Europe and churn out weapons to exterminate Slavs in the military conflict.
Medvedev has learned to talk like many disgruntled Americans, and I assume many disgruntled Europeans. For example, three days after that tweet, when the Biden administration announced a new $425 million aid package for Ukraine, firebrand right-wing congresswoman Marjorie Taylor Greene wrote something similar on Instagram. Here's what she said: This is why the American people hate the government so much. Americans can hardly get a measly $750 after Hurricane Helene, and this America Last globalist, as opposed to an America First non-globalist, sorry, I'm explaining a bit here as I read, is writing another check to Ukraine for $425 million. This is now making Americans actually resent Ukraine. That was Marjorie Taylor Greene, ladies and gentlemen. I think the broader point is just that you have
this handoff, this movement from the state-actor agitators who see opportunities and leverage them, and Russia is very good at that, and the fact that, simultaneously, it's not their fake accounts that matter. It's the amplification of what is essentially real American sentiment, and that is why it's so hard to address. Because, as you note, the upset, the fear, the anger is real. And this is why I think quite often we misdiagnose it as, like, a social media problem or a truth problem, when it's really more a crisis of trust and a crisis of institutional legitimacy.
I know that you mention the marketing industry a few times in your book, and I wonder, what do you consider to be the marketing industry's most valuable contribution to studying propaganda? And is it a great gift, that it's given us a bunch of knowledge, or is it pernicious? Has it kind of distorted what should be? Back in the 1920s in the U.S., marketing and propaganda were not really seen as all that distinct.
So the title of the book, Invisible Rulers, comes from a phrase from Edward Bernays, who was a propagandist for the U.S. government and the Creel Commission in World War I. His job was to sell the war to the American public, right? And this was considered, of course, a patriotic act. The view of propaganda at the time was: of course the government is propagandizing to its own citizens, how else would the citizens understand these complicated issues? And Bernays describes invisible rulers as
the people who have the power to mold the public mind and who are largely unseen by the public. And he says that it is not, in fact, the president or the figurehead who is doing the molding of the public mind. It is this collection of advisers and speechwriters, people who understand rhetoric, people who understand
how to sway public opinion, how to write oratory that moves audiences. And so he writes this book, Propaganda, in, I think it was 1928. And most of the anecdotes in there are about marketing. Most of them are about this idea, he calls it public relations, and he becomes sort of known as the father of public relations, that companies should be doing this too. Because, yes, you can do advertising by talking about how great your product is.
But the insight that he gets at in the book is that you need to appeal to people in their identity as members of a group. That is how you sell them a product. You don't talk about how great the cigarette is. You talk about how liberated women smoke, right? And then you hand them that particular brand. But by that point, the brand is filling a need that is tied to their identity and how they see themselves in society. I am a liberated woman; therefore, I will smoke. Oh, look, Virginia Slims is the brand that I should be smoking, right? And so he goes into a lot of these case studies, essentially linking this idea that the role of a good marketer is to appeal to people in that capacity, in their membership in a group identity. Which is how, in the 1960s, you see Jacques Ellul and some of the big propaganda-theory experts who come a little bit later making that very same argument: that it is the individual's presence as a member of the crowd, an activatable member of the crowd, that the propagandist is actually seeking to connect with. The idea that they're trying to change opinions is somewhat outdated; rather, they're trying to activate the faction that already holds those opinions. And so where I wanted to go with it in my own book was looking at what happens when you have an infrastructure that is tailor-made for that. The very term "influencer" arose because marketers wanted that reach; they wanted mommy bloggers to sell cereal to people, not a celebrity, a nice, accessible mommy blogger
who has a rapport with her audience: some, you know, hey guys, I'm just like you, and I feed my kid these Cheerios every morning. So again, it's appealing to you in your membership in a group, your membership in a community. And it's a very persuasive, very impactful thing. And this is why, I know that when you say the word propaganda, people think of Nazis and great powers, you know, the U.S. government, Russia, the U.K. But I think it's that capacity that's been democratized on an ecosystem that is structurally designed to help sell ads, right, to help market to people in their role as members of a group. And I think it's a really interesting phenomenon.
Okay. Are you skeptical? Well, I mean, no, it's just a very big thing. Another thing that you get at in your book is that there's that traditional, kind of outdated idea of propaganda, and then there's this new sort of democratized world of propaganda, and that it's not necessarily better. And in some ways, I mean, doesn't it seem better, though, that it's democratized? Isn't that, like, good? This is a really interesting question.
Even if they're both bad, it's good democracy. I can see that. I can see that. I guess where I get hung up a little bit is that I'm an institutionalist, right? I believe that society doesn't function without institutions. I believe you need them. A colleague of mine jokes that the question is, are you more comfortable with tyranny or chaos, right? Tyranny is essentially top-down; you have that capacity for tyranny. When everything is fragmented into niches, and you can only establish quote-unquote truth within the relative confines of one niche, and those truths don't transfer across niches, then I think you wind up in chaos. And so, yes, both are bad. But the thing that I find troubling about chaos is that it prevents the kind of consensus, right? The sort of
consensus reality, if you will, that is required for solving collective problems. So I feel like it leads to a devolution of the kind of society that we've known for a few centuries now. And I don't see anything positive in that destabilization. You see the accelerationists and such, and you're like, okay, so you burn it all down, and then what magically rises from the ashes? And that's where maybe I suffer from a dearth of imagination, but
this is where I feel like, you know, maybe the top-down is a little better from that standpoint. I really don't know.
Thanks for tuning in, folks. This has been The Naked Pravda, a podcast from Meduza in English. Remember that our "undesirable" status back in Russia means our entire news outlet now relies on readers and listeners around the world to support our work. Please visit our website for information about how to become a contributor with one-time or recurring pledges. Thanks again. Until next week. The Naked Pravda.
♪