On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Rauw Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.
On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Rauw Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org slash bots. It's on! It's on!
Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher. And I'm Nayeema Raza. Kara, you sound under the weather. This is my October cold, which turns into a November bronchitis, and then maybe a
December flu. When you have toddlers, this is what happens. Yes, 'tis the season. And when you work with Kara, this is what happens. I think I'm just at the beginning, just at the early stages of this. Thank you, Kara, for our Monday hangout. Thank Clara and Saul and their preschool class. It's true. On Thursday, we had Christiane Amanpour on the show to discuss the conflict in the Middle East. And this is obviously a hot war, but it's also an information war.
And that's what we wanted to talk about today, that organizations and people are trying to garner support for their cause. There's a flood of information and there's also a flood of disinformation that's entered the sphere. Yeah, every war has information battles within it, whether it's the radio for the Nazis or, you know, television in Vietnam certainly played a big role when people saw the pictures coming over. And this one's even sort of more problematic because you don't know what's real and what's not.
And you don't know where it's coming from. And people also are making money from it. Yes, the monetization is a real issue. And also just any war presents opportunity for opportunists. So you see this in physical conflicts where people come in for arms. And you're seeing here a number of Islamophobic bot accounts of Indian origin coming on, as well as Chinese and Russian propaganda here.
trying to wedge and monetize. It really just is propaganda writ large, using AI tools and all kinds of digital tools. And the problem is that's where most people are getting their information right now. And it can be used in all kinds of ways, with video from previous events, with genuine video from another conflict that is used here. There are even people on TikTok who are not
in the region who are putting on helmets and pretending they are. It's really quite sick in a lot of ways, and at the same time makes perfect sense. Even in this fog of war over the hospital strike, the conversation all of a sudden moves to this one thing. It moves to this very focused conversation about the hospital, and it's hard to broaden out. It is, especially because there was some real...
There were real things happening, including trying to get aid in there, which I think is really important. And so, look, this was a tragedy. Absolutely. But what it becomes is not the actual thing. It becomes an argument over the thing. And it's not really about people losing their lives.
It's about scoring a point for your team. And that's really, really not the point. It's that people died. And not just that people died in this one blast, but people are continuing to die. And if we just focus on trying to understand one thing, any one item of this war, we can actually be...
you know, completely confused. It's like all social media. It's a distraction. And again, I don't mean to minimize it, but people go down these alleys on every topic known to man. Some of them are very benign, like cooking, how to cook this particular thing. But some of these alleys are designed to make you crazy. And that's what they do. Yeah. And it's also a time when there's so much disinformation and so much awareness of
disinformation that this phenomenon, the liar's dividend, occurs. If anything can be fake, then nothing can be real. And so anything can be disregarded as fake. Right, which is the whole point of this: to create what was called the fog of war in the old days. And now it's really, really foggy. And we should mention here how the platforms have changed
dramatically in the last year. What pressure are they under? They've pulled back quite considerably. They've had such a hard time. And listen, let me give them one thing: it is really hard. But this is the business they're in. They have to take responsibility for what's happening here, and they just have never wanted to. And they have more tools, for sure, but they've cut back on staff. But it's become clear it's a flood of information, not just by real people, but by bots. And again, generative AI has...
lifted this into a quantum level of difficulty to deal with. And they really don't have the intent to. They pretend they're not media companies, but they are. And media companies make mistakes. Look, a lot of traditional media companies have made mistakes here, but they self-correct pretty quickly and get better. Meta had to apologize after inserting the word terrorist into the translation of bios of some Palestinian Instagram users.
They said that was an error of auto-translation. They have issues all the time, but they're really having problems here, as reported by the BBC and The Guardian and others. And then, per reporting from Mike Isaac of The New York Times and others, thousands of users posting pro-Palestine content have also reported that their posts have either been suppressed or removed, even when they're not in violation of the platforms' standards.
Yeah. Meta, of course, is under extreme pressure here and had a response to this issue. They had a typical Facebook statement, long and confusing, but the part that's important is: we apply these policies equally around the world and there's no truth to the suggestion we're deliberately suppressing voices. We can make errors, and that is why we offer an appeals process for people to tell us
when they think we've made the wrong decision so we can look into it. I mean, that's like the alley to end all alleys. I don't ascribe nefarious intent to them; it's usually that they can't handle it. Yeah, they're also under extreme pressure from the Europeans, under the DSA, to get their hands around disinformation. They're trying to avoid, I presume...
violent content being on the platform. I've asked whether it's possible that those policies have disproportionately affected certain groups unintentionally, and was unable to get specific answers to those questions. You know, everybody has a problem with these platforms because they don't do a good job. That's really the situation. And so that's always going to come up as a problem, because they're inadequate to the task, which I've been saying for about a decade now. But we'll see.
But we brought on a panel of great guests today to speak to us about this and help make sense of it. Renee DiResta is with the Stanford Internet Observatory. She's been researching how disinformation spreads online for years and has previously worked with the Senate Select Committee on Intelligence, as well as Congress and the State Department. She's also under attack by a lot of
mostly Republican groups over the work she's doing. It's part of a broad pattern of trying to chill academic research in the area. Katie Harbath, our second guest, is a former Republican strategist who spent a decade on the public policy team at Facebook, now Meta, where she led the team that managed elections on the platform, I think, until 2021. She also has her own policy advisory firm,
called Anchor Change. And our final guest is Shayan Sardarizadeh. He's a senior journalist at BBC Verify, where he covers disinformation, conspiracy theories, and extremism. He has not slept a lot recently. He's someone whose work I've followed for a while. He really takes an international lens and has helped debunk misinformation and disinformation everywhere from Iran to Europe,
and has a very global perspective. Yeah, a lot of media companies now have disinformation reporters because it's impossible to keep up. He probably never sleeps because there's always an issue you have to look into. We'll take a quick break, and we'll be back with the panel with Renee, Katie, and Shayan. This episode is brought to you by Shopify.
Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech.
Katie, Renee, Shayan, thank you all for joining me. Thanks for having us. I want to start by asking each of you to tell me whether the first weeks of this war have seen more, less, or the same amount of misinformation and disinformation, especially visuals, which is what's being focused on here, as the first weeks of the Ukraine war, and why you think that is. Katie first, then Shayan, then Renee.
To me, it seems like more, particularly given the coordination Hamas had on social media, ready to push out their own videos and images of this, plus everybody kind of jumping on it, adding videos and images from past conflicts and other things. So it just seems, from what I'm hearing from others, that the volume is more, though Renee and Shayan may have different opinions on that. Shayan? I think it would be difficult to say, because, you know, I don't have all the sort of...
data from both conflicts to be able to compare and say, well, this one was more, this one was less. I think what I would say is it's definitely been overwhelming, because I was doing the exact same thing in the first few days of the Ukraine conflict, and there was a ton of misleading videos and images. And it's not just visual posts, by the way; it's also completely unsourced, unevidenced claims.
You know, when something breaking is developing, and in this case being a war, that can actually have consequences. So I would say it's been quite overwhelming, probably more or less similar to the Ukraine war. I can't definitively say which one was more, but both of them were bad enough, basically.
Renee? I would say we don't know, reason being we don't have Twitter API access anymore. Right. And so explain what that is for people who don't understand. Yeah. So back in the olden days, there were really good research relationships between academic institutions and Twitter, and we had access to what we might call the fire hose, right? Various types of fire hoses. I'm not going to get into the details, but ways that we could build tools, create dashboards, and just ingest data directly from the company.
And because it was an academic project, we didn't have to pay for it. The kind of data access that we had now costs over $42,000 a month. So a lot of academic institutions have backed out of observing Twitter, which means that our focus has really been on Telegram. That's not entirely bad, right? That's where a lot of the content for the impacted populations is; they're not necessarily sitting on Twitter. So spending our time on Telegram isn't a bad thing.
Elon very significantly changed curation. So there's what's visible, and then there's volume, and those are not necessarily the same things. What's visible is really decidedly different and, as I think Shayan would agree, often of a very significantly worse quality in terms of accuracy.
Shayan, we've seen video games and TikToks from old concerts repurposed as misinformation online. We've also seen rumors, as you just noted. Give us a rundown of what you think are the five most viral pieces of dis- and misinformation, and tell us what you know about them. Obviously, a lot of attention was given to the unsubstantiated claims that 40 babies died
or were beheaded, but I'd like you to pick out your own. I would say that the most viral stuff that I've seen, that I've been logging in the last two weeks, has been basically video
that is unrelated to what's been going on in the last two weeks on the ground in either Israel or Gaza. You know, it could be from past conflicts. Obviously, Israel and Hamas have been involved in several conflicts just in the last 10 or 15 years. So either from those past conflicts, or from the war in Syria, or from the war in Ukraine, or from military exercises.
In the case of TikTok, actually, it's become really, really fashionable now that when a conflict happens somewhere, you just say you're running live streams of that conflict and you either use video of past conflicts or you use a YouTube video of military exercises and actually make money off of it.
And put on a helmet. I've noticed some people who are not there are putting on helmets, correct? Absolutely. And actually making money off of it. That's the important bit. So most of the stuff that I have seen, and I've seen stuff from both sides, by the way, it's not been one-sided at all. I've seen claims from both sides, from both directions, also supporters of
both parties to this conflict, sharing all sorts of completely untrue material. And the most important thing for me personally is that this is not fringe stuff, and this is what people need to know. Meaning what? We're not talking about stuff that is being shared by 50 people or 100 people and, you know,
100 retweets, 200 likes. We're talking about material that's been viewed tens of millions of times on platforms like X, formerly Twitter, TikTok, YouTube, Facebook, Instagram. You know, we're not living in the 1950s and 1960s anymore. People these days don't necessarily sit in front of a TV and watch their sort of nightly bulletin to find out what's happening around the world. They go on the internet, they go on social media, they look at their feeds, they want to get updates constantly, particularly when there's an event of this magnitude. Right.
So quite a lot of the visual evidence that they've been getting, unfortunately, online in the last two weeks has been completely false. There's also been quite a lot of video that's been shared that's actually genuine and from the last two weeks and has been helping us, journalists who want to investigate what's going on, for instance, with what happened at the hospital two nights ago. Right, we'll get to that in a minute. Everything that we've done has been based on footage that's been shared online that is genuine.
But the point is you have to verify first that that footage is genuine. And while you're doing verification, quite a lot of stuff turns out to actually be untrue. And you see, you know, a piece of video that is from the Arma 3 video game, which is a military simulation video game, has 4 million views on TikTok.
So Katie, there's talk of this being a TikTok war, according to Bloomberg, showing a new role for the platform, which has more moderation than some other platforms. Now, of course, you worked at Facebook, now Meta, for 10 years. Explain the platform shift and if and why it matters. Well, I think first and foremost, for many, many years, platforms like Google and Facebook have built up their defenses to try to be able to find some of this content. They're still having a lot of challenges, you know,
as well in terms of doing that. And as Shayan was mentioning, and Renee too, verifying this content, working with fact-checkers to verify it, and then deciding whether you're going to de-amplify it or take it down, who's doing it, what's their intent, and who's sharing it, is also a very hard thing to do. And so a lot of these newer platforms are having to grapple with a lot of the questions that some of these legacy platforms have already kind of worked through, having spent years refining their policies and their algorithms and everything to find it.
Shayan, I do want to ask you about how you identify what is disinformation. How do you go about that? Well, I think the first and most important thing to clarify is that my colleagues and I only go for content that is viral. And then, when we have a piece of video or an image or...
basically just a post online that is a claim that is either incendiary or has, you know, implications. What we want to know, first of all, you know, when was this piece of video filmed? Where does it come from? Who's the original source? Can we actually source the video to the person who filmed it, the platform where it was first shared? Because obviously you put something on Telegram or on WhatsApp, then it travels across platforms. So just because you see it on TikTok or on Instagram doesn't mean that's where it came from. You have to source it.
You have to find out who filmed this piece of footage, who first posted it online, and then you have to contact them and talk to them because they probably have more context. Then the second thing is, is this actually the entire footage? Is there a longer version? Have other people been at the scene where this video was filmed? Meaning trying to manipulate it to look... Well, exactly. Has it been edited? And then the next thing is,
Is this actually current footage, or is it old? So we have to go online and look on platforms like YouTube, Instagram, TikTok, you name it, and try to find... You do reverse image search, correct? Yes. So we take screen grabs of pieces of video. With images, you don't need to do that; the image is there, you can just reverse-search it. But with video, we take 5, 10, 20 screen grabs of a piece of video. And then we go to several reverse image search tools, including Google, Yandex,
Bing, and then we try to find whether there are other examples of this video shared online in the past. Then, if it is from these past two weeks, you want to investigate it properly and find out what it actually shows, who actually filmed it, where it came from, because sometimes you have genuine pieces of video that are either edited or deceptively manipulated or taken out of context.
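As a rough illustration of the screen-grab step described above, here is a minimal Python sketch. This is not BBC Verify's actual tooling; the file names and the number of grabs are hypothetical. It picks evenly spaced timestamps in a clip and builds a standard ffmpeg command line that would export one still per timestamp, ready to be fed into a reverse image search tool.

```python
# Sketch of the frame-grab step in a video verification workflow:
# pick several evenly spaced timestamps in a clip, then extract a
# still at each one for reverse image search.
# (Illustrative only; paths and counts are made up for the example.)

def grab_times(duration_s: float, n_grabs: int) -> list[float]:
    """Return n_grabs timestamps spread evenly through the clip,
    skipping the very start and end (often black frames or branding)."""
    if n_grabs < 1 or duration_s <= 0:
        return []
    step = duration_s / (n_grabs + 1)
    return [round(step * i, 2) for i in range(1, n_grabs + 1)]

def ffmpeg_cmd(video_path: str, t: float, out_path: str) -> list[str]:
    """Build an ffmpeg invocation that seeks to t seconds
    and writes a single frame as an image."""
    return ["ffmpeg", "-ss", str(t), "-i", video_path,
            "-frames:v", "1", out_path]

# For a 60-second clip and 5 screen grabs:
for i, t in enumerate(grab_times(60.0, 5)):
    print(ffmpeg_cmd("clip.mp4", t, f"frame_{i}.jpg"))
```

Each resulting image can then be uploaded to reverse image search engines (Google, Yandex, Bing, as mentioned above) to check whether the footage circulated before the event it supposedly shows.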
So talk about the sources of disinformation on both sides and where they're coming from, because a lot of other actors have also gotten involved. There's a lot of anti-Palestinian disinformation and generally more Islamophobic content around this conflict coming from India. So where are the sources? I would say the vast majority of misinformation that I've seen has come from people who seem to have nothing to do with the conflict directly.
So it's just people online who are farming engagement, farming followers, farming influence, and in some cases trying to put as much outrageous, shocking content as they can to make money off of it. When it comes to two sides of the conflict being the government of Israel and Hamas,
We expect the two sides involved in a war to do that, because wars these days are not just fought on the ground; there's an information war as well that you need to win. So we expect the two sides to try to put whatever they can online, regardless of whether it's factual or not, to win the information war. That's expected. And also, for people who live either in Gaza or in Israel, again, because of all the atrocities that have happened in the last two weeks, you expect them to be emotional and obviously taking, you know,
Yeah, having opinions, etc. You absolutely expect that. And people, you know, obviously people have seen horrific stuff.
So, you know, I don't pay too much attention to somebody who's emotionally affected by this conflict putting something out that is misinformation. What is important to me is people who are not directly related to it, say somebody sitting in America or in Great Britain or in China, posting content that, in my view, is just for gaining influence and engagement online. Now, apart from that, there's also more nefarious misinformation or disinformation that is put out for political gain. One good example of it:
Last week there was a video that was posted online that quite a lot of people saw that had the branding, logo and style of BBC News. This one was actually fake, 100% fake. We didn't produce it, but it looked genuine and it said that we had reported that the weapons that Hamas militants used
on the 7th of October had come from the government of Ukraine, or were weapons that had been given to the government of Ukraine by Western powers and had been smuggled out of Ukraine and ended up in the hands of Hamas. There's zero evidence for it. We have not reported it. And then Dmitry Medvedev, the former Russian president, put out the same baseless claims online. So you have to think: why would someone go through the effort of producing a fake BBC video to say Hamas militants got their weapons from the government of Ukraine, which has nothing to do with this conflict?
Yeah, yeah. Why would they do that? I wonder, Shayan. I wonder why they would do that. I don't know. Maybe. I have some ideas. I want to get into one specific attack: the blast at the Al-Ahli Baptist Hospital. None of us are experts on airstrikes, but the blast and the questions around it are metastasizing online. Katie, you've worked on elections, where anyone can build whatever narrative suits their purposes. Is this common, what's happening here? It is, but I think one of the things...
that I'm seeing that's different is also just the confusion among mainstream news organizations. In terms of, you know, The New York Times had a headline that I saw with, you know, initial reporting, trying to be first at this. And that helps add to the confusion. As Shayan was saying, then, when people are making fake videos...
of this, using the branding and stuff of these news platforms, it continues to contribute to people just not trusting and not knowing what is true, because we're all trying to figure it out in real time. And so that makes it much harder to verify what is or is not true from a social media company's standpoint. But I think for everybody, what you choose to amplify or not amplify is a hard question,
and trying to figure that out while it's all happening in real time. To me, right now, this just feels like it's coming faster, in higher volumes, with more things happening; there are just a lot more facets to deal with than what I've necessarily seen in a particular election situation, unless it's something...
like January 6th. But even then, this feels like even higher stakes because of the amount of gruesome images, and that also has an impact on the people trying to moderate this, trying to cover this. There's an emotional aspect to this as well, and a burnout that continues to build as this goes on longer and longer.
Yeah. So, Renee, how do you look at this? Is this common from your perspective? You've seen hundreds of these over the years, I would assume. We have. I think what I would say is most different is the widespread democratization of
generative AI. That's what's really different here, right? That wasn't the case in February of 2022, during the initial Russia-Ukraine invasion. There were rumors, there were stories. You might recall Snake Island, right? Does everybody remember Snake Island? That was reported, and then it turned out they hadn't died; they had been taken hostage. But there was a whole narrative around it. It was used for heroism. And then there's
the liar's dividend piece of this, where content that is real, you can say is faked because of the existence of the technology to fake it, right? And so it's this question of, you know, I see it as like the kind of collapse of consensus reality, right? You can pick which ones you're actually going to trust. You can pick which ones you're not going to trust. You can dismiss all the rest of it as like, oh, that's AI generated. And that's really, I think, fascinating.
For me, you know, I did follow the beheaded babies rumor quite closely, right? Again, because we were in the first-responder Telegram channels, and
you know, you see images in there, and there were very specific claims that were made: 40, right? That's a very, very specific claim. There's no evidence of that. And so you're trying to piece together: well, something clearly happened here, here's this image. And then, of course, the government tries to put out something showing, here are a few instances of this happening. No, it wasn't 40, but here's this image. And then what wound up happening was:
Ben Shapiro retweeted the image. I don't know if you all followed this micro-controversy, where the State of Israel's official channels put out one of these images and Ben Shapiro shared it, right? Yeah. Saying this really did happen.
And he has a massive following. And then people who didn't like Ben Shapiro, I think actually also on the right, which was sort of funny, took the image and ran it through AI or Not, aiornot.com. And those detectors are not particularly reliable; a lot of the time there are false positives and false negatives. And what wound up happening was there was a pixelated section of the image where they had blurred out a logo or something, and
AI or Not initially returned "AI-generated." Now, if you did a closer crop of just, unfortunately, the majority of the content, the deceased, then it returned that it was not AI, right? It was just this one section. So this was the kind of example where then people... Yeah, everybody's an expert. Everybody's an expert. Yeah, and it turned into a whole thing. Oh, Ben Shapiro shared an AI-generated image. Oh, the government of Israel faked an image of a dead baby, right? And so there are all these different channels of conversation around this, where depending on who you trust or what you trust... So even in trying to explain it, you can get caught in...
the thing, especially if you have a feeling about Ben Shapiro. In any case, Shayan, moving back to the blast: your news organization has published an explainer on it via BBC Verify, which is updated continually, which I appreciate. Explain what it is and what's been concluded so far. But will it make any difference at all? Well,
Yeah, I mean, since that blast happened, we've been working sort of non-stop, gathering every piece of video that we can and all the images, and we've been analysing them. I've slept, I think, three or four hours since that blast happened. We've been trying really hard to get to the truth of what happened, because dozens and dozens of people have died. We don't know the exact number. Obviously, the Palestinian authorities say about 500; that's their claim. But clearly dozens of civilians died.
So every piece of video that we could, we've gathered. We've tried to analyze them, geolocate them, make sure each piece of video was actually shot where it's claimed to be. We know from the live feed that went out on the Al Jazeera channel, which is 100% verified, that the blast happened at around 1859 Gaza time.
So we've tried to match that with every piece of footage that we've got, to make sure the footage being shared online was taken at that time and definitely shows the hospital, the layout of which we obviously have from satellite imagery, and then to find out what exactly happened. And we've contacted something like 20 different experts with different views, different ideas. We've shown them not just the videos that were published at night,
because some of them obviously can't show everything, but also the tons of images and videos that came out yesterday morning. We know that the blast happened in the courtyard of the hospital. It didn't impact the main building. So whatever it was, it hit the courtyard of the hospital, in the car park. There's a crater there that was in some of the images and videos we saw yesterday; it was a small crater. And there are also signs of the impact of the blast on the cars and on some parts of the hospital, on the windows.
So we showed all of that, which we had independently verified, to experts. And obviously more evidence will come out. We've also got a reporter, a BBC reporter on the ground who's been to the scene and spoken to eyewitnesses. At the moment, it seems inconclusive.
What we've been told by not all, but most of the experts who have spoken to us, is that it doesn't at the moment seem consistent with the damage you would expect from an airstrike. But, you know, which direction it came from, who exactly was responsible: the experts we've spoken to say that's still inconclusive at the moment. When something like this happens, obviously people are outraged.
They want easy, quick answers. It's important to say there are no easy, quick answers in a war zone with limited access for journalists. So just be patient. Facts will hopefully come out, and in some cases we may never know all the details and all the facts surrounding it. Renee, once a conclusive answer is established, if possible, as Shayan says.
What responsibility do social media companies have to stop, or at least de-amplify, the reach of misinformation around such an event, given it has such real-world repercussions? There are protests all over the place. So what's their responsibility? Because this has so many echoes of incidents in Myanmar and elsewhere in the past. It reminds me... again, I feel like I'm on a constant loop.
Well, I think the challenge is, you know, some people are going to see something go by once. They're going to form an opinion on it. They're not going to spend a whole lot of time thinking about it and then they'll just move on. Other people are going to really follow the story. I think in the particular case of the, you know,
bombing or missile misfire, whatever it turns out to be. A lot of people are following that and a lot of very prominent accounts are following it. So I think it is going to stay kind of at the top of the feed as people debate what happened around it. I think with social media, one of the real challenges is in a conflict situation like this,
People are looking for it, right? And so you're going to have to return something. And so I think the best thing that you can do is try to return authoritative sources, try to return people who are verified to be in the region, right? And that's hard to do. That really takes effort. We've seen platforms do it in the past. You know, Twitter is particularly...
That's the place that people would normally go for this sort of thing, but they don't have the staff to do any of the things that ordinarily would be done, right? To say, okay, who are the credible sources? Who are the people who are in region? Who are the official accounts? Maybe
maybe we shouldn't surface paid blue checks for this one. Maybe the armchair opinion of some rando commentator who paid eight bucks is not the person we should be putting at the top of the feed. That's a crazy idea, but that's the sort of thing that, you know, Twitter would have done in prior environments. You'd return to authoritative sources through badges, fact checks. Right. And, you know, I think Community Notes is a great concept,
but it's better when you're surfacing corrections for things that are established. It's not equipped. They're not journalists. They're researchers maybe or commentators, right, who are clarifying a fact or a connotation, you know, maybe something a politician says. It's great for slow-moving stuff like that. There is nobody sitting on their computer typing up a community note who is on the ground in Gaza or on the ground in Israel who has any idea of what the actual facts are. So you're winding up with community notes, right?
quote unquote, checks that have also been shown to be wrong several hours later. And that, again, you know, is not a dig on citizen journalism. It's that that's not citizen journalism. That's the whole point, right? You should be surfacing citizen journalism, but then it requires the actual effort of going and figuring out who the citizen journalists are in this particular case, who the channels are that, you know, are authoritative and should be returned first in search. We'll be back in a minute.
I want to move to the platforms. Katie, let's start with you. You were 10 years in public policy at Facebook, which owns three of the platforms I'm going to mention. I'd like you to stack rank the major social media platforms from best to worst in terms of content moderation and their ability and willingness to slow down the spread of misinformation and what tools they have available.
That would be TikTok, Facebook, Threads, Instagram. I guess I'll just mash them together. Twitter, unless you want to put them apart. Twitter, Reddit, and YouTube. Rank them and explain your reasoning. Yeah, and I think it's a little hard to, like, it's a little apples to oranges in each of the cases because each of the platforms, how it appears and stuff is a little different to them.
I will say, at least in terms of what's been announced, so I know I've seen announcements from Facebook, from Twitter, from TikTok. I've not seen detailed explanations from YouTube yet or any of the others on what they're doing.
Facebook has a lot of nuanced tools that they're putting out that I'm not seeing from others. They've got ways for people to lock down their profiles. They're making it so that people can only have comments from people that follow them or their friends. It looks like they're trying to give people more tools in terms of trying to do this. Whereas...
TikTok also put out a myriad of things that they're doing, but it's not necessarily as detailed. It's a lot of very focused, "We have these policies. We have teams that are doing it." The question that we don't know is how well they are executing on those policies and having those tools and stuff to do that. X slash Twitter is just in a totally different world, universe, right?
It's where we see stuff spreading the most after Telegram, right? And we're not talking enough about Telegram. Yeah, I'm going to ask about that in a second, but go ahead. Yeah, I was just going to say, I know the instinct is to talk about the platforms that we know, but a lot of these efforts are moving to those other ones where they have very lax content moderation. Like Telegram would be very last...
X is maybe a little bit on top of that. And I just think all of them are having a big challenge to what we've been talking about of trying to figure out what to do while people are still trying to verify this information. And they're being asked to surface authoritative sources, but authoritative sources are getting it wrong. Who do we pick? What is a good eyewitness on the ground account? What is not? Should we take more stuff down to be on the safe side, but then we run the risk of actually suppressing the...
legitimate speech. And I think that it's just by virtue of having to do this a lot more, your Facebooks and your YouTubes are more equipped for it. X doesn't have the staff, nor do I think they have the leadership that's ever gone through this before to really understand these nuanced questions. And then you just have the platforms like Telegram that just don't care.
Renee? I think I would agree with most of what Katie said. I think you see a lot of commentary on Threads, for example, of people just saying, I just want some facts. I feel like Twitter is a firehose of bullshit. I just want to kind of understand what's happening. The journalists are here now. I'm just going to kind of assume, you know, I was sort of joking around about it, but it's really true that...
I actually just started going to news sites. It's like, okay, what does WSJ.com have? What does NYTimes.com have? And then triangulate. Right. Go to AlJazeera.com. What are the different sites? Who's reporting this from different angles? And where are the actual journalists on the ground? Again, I don't want to see commentators who paid for checks.
Whereas on Threads, it was much more of, like, a much cleaner feed. And so it didn't have that kind of stuff. So just in terms of curation, I think curation is different than moderation. But in terms of what's being surfaced, it's a...
better experience. Telegram doesn't moderate. It's a different kind of content spread, right? Things will hop from channel to channel. And so you'll see, you know, when we look at shares of content on Telegram, we will actually look at who's sharing content from what other channels to try to identify new and emerging channels. Because, you know, sometimes you get a channel that's created and it says something very salacious that you've never seen anywhere before. And
all of a sudden, like it gets, you know, their content is disseminated. We saw this constantly in Russia, Ukraine, right? Some random channel would make an allegation. It would get pushed into mainstream ones. Then they in turn would get a bunch of new followers. And then, you know, that would be how they would kind of amass an audience through that. Cheyenne, what do you think? Yeah, I think Telegram, I would say, is probably the most important news gathering platform for us and has been for quite a while, you know.
from the war in Ukraine, you know, Telegram is one of the most popular apps in both countries. And also with the conflict in Israel and Gaza at the moment, also in both countries, Telegram is quite popular. Particularly because, you know, Hamas is obviously banned from major platforms, because it's a proscribed organization in most Western countries. So most of its content
comes from Telegram. So they post on Telegram and then it travels from Telegram to other platforms. I also have a particular interest in Telegram because I also cover conspiracy theories and people who start conspiratorial narratives. Most of them are on Telegram because many of them during COVID and also after the 2020 election in the U.S.,
were either suspended or their reach was limited on major platforms, so they went on Telegram. So that is the most important platform for me. I would say there's not much moderation on Telegram. And it's not because they sort of set themselves up as some sort of a free-speech-defending platform. It's just, I don't think they care that much, to be perfectly honest. I think it has to be really egregious content. It has to be proper, like, you know, terrorism content. We're talking about actual neo-Nazi people, or we're talking about, you know, Islamic State, or, you know, really, really horrific stuff, like, you know, child sexual abuse images.
If it's that type of content, yeah, they will take it down. And they have to, by the way. They have to, because some of it is obviously illegal. But when it comes to other stuff, you know, you can post whatever you like on Telegram. Right. All right, Katie, speaking of which, Thierry Breton, the EU commissioner in charge of enforcing the newly passed DSA, has started an investigation into Twitter/X and sent warnings to other social media companies. Elon Musk has threatened to pull X out of Europe, for example.
They warned Twitter last December. Again, they've warned them a lot, threatening fines. Yeah, I think... Well, first, this is... I think Breton sees this as his first chance to, like, really enact the DSA, right? And I feel like he's taking...
opportunity to be like, I'm going to show that I'm going to take this seriously and I'm going to hold these platforms accountable and make them do this. I think there's a lot of concern. Civil society organizations sent him a letter about abusing this power and not being very specific in terms of what they're doing and the chilling effect that this can have on the platforms in terms of
speech and content that is on there. Like, this was not meant to be his bully pulpit. And we wanted to have a more thoughtful process in terms of doing this. And I don't know if these letters are the right way to do it, but we'll have to wait and see how this unfolds. But the other thing is that, like, Telegram's not a part of this at all, because they're not registered as a VLOP.
So we're sitting here saying Telegram is one of the most important platforms of where this content is spreading, but I don't know of anybody that has any regulatory power over Telegram to actually investigate them. So they're going after Twitter, but not this one, right? Does it matter? Correct. And I think
and it's right to be asking Twitter these questions. And I do think, like, you know, Threads is not in Europe yet either, and so it's going to be interesting to see as these government regulators and these tech CEOs start to stare each other down. We saw it too in Canada with Meta and news. Yeah, and they just pulled out. They pulled out, and so I think this is
I think this is the EU trying to show that they are going to take the DSA seriously. They are going to enforce it. Now it's going to be, how do the companies respond? And we're just going to see this back and forth kind of happening while we're also trying to dig into what is actually true or not and what's happening with the content we're seeing. Cheyenne, how do you look at it? Because as we were noting before, a lot of this stuff gets monetized and it's good for these companies.
And at the same time, these regulators are trying to get their arms around it. In Europe, more aggressively, U.S., not even slightly. Does it matter to what you're doing to stop disinformation or does it not matter?
Look, two things. First of all, I would say, you know, I'm a journalist and as a journalist, I value immensely the right to free speech and expression. You know, I rely on freedom to be able to report. You know, I've reported in countries where you don't have that right and it's not fun. So I would like to say, you know, I don't want people to get censored or their views to be blocked. I want everybody to have the right to express themselves freely. Then I'm not also, as a BBC journalist, I don't think it's for me to, you know, tell...
what sort of policies they should come up with or what the European Union should say to these platforms. All I would say is these platforms have certain terms of service that is clear. And as long as they stick to those,
you know, as long as they can stick to those, and have the ability to stick to those, things would be much better than they are now. The problem is, because of the volume of content that is posted on those platforms, particularly when something like this happens, when there's just such a torrent of content being posted and so much interest, they just cannot physically keep up with it. And also the tools that they've designed, particularly the automated tools, to be able to deal with this type of content cannot keep up. Now, the problem is,
When you design policies specifically, and also when you design your algorithms in a way specifically to reward content that is outrageous, to reward content that is shocking, to reward content that is posted for engagement, then obviously this is going to happen. But
You know, the platforms are not going to change their algorithms. The platforms are not going to change any of that because that's how they make money. So the best thing we can do is for ourselves to, first of all, try not to amplify content that is unverified and not accurate. And that's most that we can do. And for somebody like me, I can just go on and try and find the most viral pieces of footage and tell people this is not true.
Renee? There's a lot of other policies on other platforms that are specific to monetization, where, again, this question of your right to be on a platform versus your right to make money spreading, you know, your right to post something that's bullshit versus your right to monetize the bullshit. Other platforms have differentiated between those two things. Well, and there's a scale issue, too, right? You can have the policy to say that you want to demonetize them, but then finding those people and making that decision in...
in a quick manner is going to be difficult for any platform. Okay, we have a couple more questions. There's a couple more. Katie, social media users accused Facebook and Instagram of suppressing pro-Palestinian posts. Meta executives said it was due to a bug on the platform related to reshares and posts,
and that it affected all users equally. Thousands of users are complaining about continued suppression. Again, this has gone on for a long time with all these social media sites about suppression and shadow banning, et cetera, et cetera, which you're familiar with. Any thoughts? What could be happening? I generally found that this is a mixture of things. There have been many times where it's legitimately a bug. But then there's also more systemic things that I think we're seeing as part of this because...
There have been complaints in the past about Facebook suppressing Palestinian voices. There was a human rights assessment that was done that showed that fewer resources were put into machine learning classifiers in Hebrew versus Arabic. There were mistakes in terms of what dialects they were using, how much resources they had in terms of content moderators and human capital on this,
that I think have also played a part in this. And so it's usually when these things are happening, there's a lot of different parts that are playing a role in determining the contours of these problems that the platforms have.
Some of it's the choices they made on resources and where they're putting it. Sometimes it is just a bug has happened and it's just a coincidence of what happened here. It's hard to know beyond what Meta actually said, because, again, we don't have a ton of transparency to be able to verify what they are or are not saying is happening. But I've usually found it's not any one thing. It's usually a combination of a whole bunch of things that contributes to something like this. Renee, speaking of which, actual access to these APIs from the platforms,
on top of layoffs, you're getting limited access, as you mentioned. And the fact that Meta has let its social monitoring tool CrowdTangle fall apart. It's harder for researchers to track misinformation online in the United States because of attacks from conservatives, including free speech lawsuits and efforts to block any information sharing between the government and the platforms. Explain the chilling impact of how this plays out. Now, I know you and Alex Stamos are personally named in lawsuits.
I'm not sure how much you can tell me, but talk about the chilling issues here. Well, I can't comment on the pending litigation. So, you know, that is a little bit of a chilling effect by itself. There's two different things there. The first is just the technological access and prioritization, right? So Facebook is building a new researcher API, right?
It's taking feedback. TikTok is also building a researcher API and taking feedback. So some platforms are still doing things voluntarily. This is where I think the DSA-- this is the part of the DSA that I like, actually. The Researcher Data Access piece is the part that I've been supportive of.
In that regard, that basically says that since these tools are so powerful, since they have such social impact, the ability to understand what is happening on them should be made available to researchers, to qualified civil society and journalistic organizations.
And by qualified, that means, like, you can take certain steps to protect privacy. You have certain capacity to analyze data. You're not just going to ask for something and then go dump it on the Internet somewhere. And so that piece, I think, is actually a good regulation. And I really wish that we would have the Platform Accountability and Transparency Act passed here similarly. Yes. Except they don't have a Speaker of the House, so they can't do that. But go ahead.
True, true, right? I mean, well, this is where I'm like, yeah, sure, we'll regulate someday when we have a functioning government. But they can't order lunch right now. So it's very hard for these people. But the other piece, right, which is that information sharing piece, you know, we hit a point where misleading, you know, nonsense, candidly,
made by, you know, rent-a-quote fake think tanks was laundered through conservative media to create an impression of some sort of vast collusion operation between academics and government or government and platforms. There were certain times when governments engaged with platforms in ways that I think were counterproductive, right? That jawboning line. Were they trying to coerce them to take down content? That's where I think another, again, transparency law to have government requests be logged
is a very, very useful and necessary thing. If they ever get a Speaker of the House, they can work on that too. But the...
Government does need the ability to communicate with tech platforms. And what we see a lot of the time is tech platforms reaching out in the other direction too. You saw during COVID, Facebook reaching out to the CDC saying proactively, "Hey, we see this rumor going by. What should we know about it? We want to surface authoritative information." And again, the same thing in conflict zones, you want them to be able to be in touch with governments in the region.
We parse so much of this through the stupid American culture war and polarized domestic politics. But the chilling effect impacts their ability to set policy and engage internationally. And that's one of the things that's happening here, right? It creates norms and it creates a fear that they're going to engage in some way and then they're going to get dinged for it in a regulatory sense.
in some country or another, and that's actually very bad. You do want that channel of communication open, particularly where terrorism is concerned or where atrocities or violent events are concerned, because you might remember back in the olden days of 2015 when this was happening with ISIS, those communication channels were not really there
And it wasn't great. Yeah, absolutely. The importance of research is really critical to these companies. The ability to talk among and between themselves. They make small, stupid mistakes. That's very different than creating this ridiculous conspiracy theory. And it is nonsense. You cannot say that
but it is. It's absolute nonsense. And we are all concerned about overreach of government, but in this case, it's being used for another reason altogether. That's my opinion, so I'll just say that. Okay, one last question very quickly for each of you. I'd love to know what sources of information you think are most credible right now and why. What's your media diet? Cheyenne, let's start with you.
Well, obviously, I work for BBC News, so I consume quite a lot of BBC content. But, you know, I would say there are plenty of diligent, hardworking, dedicated journalists on the ground. And, you know, when a conflict like this happens, and, you know, saw the same thing with the conflict in Ukraine before that,
the war in Syria, there are dedicated journalists who are not actually employed by major news organizations and do not have access to all sorts of support that journalists who work for major news organizations have access to, who risk their lives with basically limited support and just go to war zones and try to report accurately without any sense of partisanship or any bias. You know, I would say for any
Apart from, like, major sources of news that try to actually at least verify information and not be partisan in one direction or the other direction, just trust journalists, or try to trust journalists, who are on the ground reporting, on the ground risking their lives. You know, they're doing a huge service to... And is there any non-journalist organization that is journalist-adjacent that you like to use?
Well, I think the ones that I... I don't want to name names. The ones that I follow closely, they all work independently. They do it in their own time, and then they sell their content to major news organizations. But basically, they're doing it of their own volition. They haven't been asked by anybody to go, and they haven't got support. They just do it themselves, and then whatever piece of material they get, they just try to sell it. Yeah, not Uncle Harvey at his house or in a bar in...
wherever, although those exist everywhere. Katie? So I would say hard news. I've got CNN on. I mean, I think they just do a great job in terms of crisis situations like this of hard news reporting, New York Times, Post. For analysis of what's happening online, the Atlantic Council's Digital Forensic Research Lab does a fantastic job of doing all of this. And so I'm a non-resident fellow there. And so
I've been talking to a lot of them, following what they're putting out. And then I do get a lot... Threads has been where I've been reconstituting my journalism and my journalist feeds and kind of getting some of those stories and stuff from there, and seeing that community rebuild has been a really interesting thing. And then I do just get a lot of my news through newsletters, whether they're Substack newsletters or those coming from news organizations, because it's just hard for me to curate. So being able to get that through my inbox to find those stories, whether it's analysis...
or hard news is sort of like what my go-to day-to-day is. Okay. And Renee, let's finish with you. Bellingcat, one of the, I think the most useful. I saw some, you know, you always have to have the meta-narrative about is media collapsing? Did disinformation researchers know nothing? Of course, that's been all over X. And one of the things that I thought was funny was like seeing that thread and then
literally like there's Bellingcat's post like next on down the list. And I'm like, oh, you know, you could actually just follow these guys. Yes, media has been dying for centuries. There it is. Media has been dying for centuries.
So I do, you know, I am in the Telegram channels, right? I mean, I follow very closely all of the stuff as it comes out. I think for me, it's just a matter of take it, parse it, but like let it sit for a couple of days. The world doesn't need my commentary on, you know, like it doesn't need my outrage. And I think that that ability to, you know, let something emerge, the facts may change.
is the kind of critical life skill that we need in this new social-media-mediated environment. It's true. Renee, you're never going to be a tech bro then, because you have to have an opinion about everything. In any case, I really appreciate it. Thank you. Thank you.
What is your media diet? I have a very good media diet. I read all the major outlets, you know, including cable and across the world. I look at Twitter largely on quick things I know they do well, like what's going on in Congress right now. They're the best at it. And I read everything. I use Artifact.
to find stories that I might not have been aware of. And I mostly rely on friends who have recommendations wherever I find them. I obviously read a lot of authoritative sources. I also think a good add to the media diet is to read international news, not just in wartime, to experience and understand the world, not just in these extremely hot moments, but to have a view of a situation of politics on the ground, how they affect everything that plays out in these heated moments. Yep, absolutely. It's important, but most people aren't, let's just say.
Most people don't read anything. They read the cereal box and then go to Facebook to yell at their uncle. That's really what they do. Most of our listeners are probably reading stuff. I think a lot of people, I think the United States has a terrible media diet. I have a friend, Walt Mossberg, who's working on the media literacy project. I think it's the single most important thing we can do around media, which is get people literate in it.
And now it's more important than ever because you have to add the online element. Well, I thought one of the most interesting things to come out of that conversation was Renee's point that this might actually lead to the revival of homepages going to NewYorkTimes.com or WallStreetJournal.com. And this irony that somehow in the great mess of Twitter and X, Elon has helped resuscitate the media business, the legacy media business.
Oh, I don't know. I think Twitter gets a lot more attention than it deserves in that arena. I've run media sites and Twitter is never the highest, never, ever. It's one of the lowest, actually. We used to get more referrals from Instagram, a lot from LinkedIn, original Google, you know. And so I don't know if it's Twitter. I just think a lot of journalists are on it. And so maybe across the world, that's a little different. And I suspect it is. But I think Facebook continues to be the most important purveyor of information. I think all the studies bear me out on that.
But Twitter X now is definitely worse than before, right? Oh, terrible. The verification. It's very hard. I mean, there are so many journalists on there and it's historically been, I mean, look at the Arab Spring, historically been a way that you could see what's happening on the ground, that you could understand and you could curate your feed, you know, through verification to some extent. Yeah.
And now I think a lot of what they were saying is that now mainstream media dances to the tune of social media. And that's a problem, because social media is really bad at their job. And so they should just stick to doing the reporting. And that's it, because you cannot rely on the stuff. You can some of it, but not much of it, unfortunately.
There's also all these independent groups verifying things. Yeah, the growth of Bellingcat and the other things. Those are great. Those are fantastic. That's different than these things. Those are not online publications per se. They're just really good at journalism. Yes, they are. Do you think that, by the way, here's a question. Do you think that journalists right now, given the mess that is Twitter and X, and part of it being what this panel is describing, the algorithm preferring the random comments of somebody who paid eight bucks to Elon,
mean that media companies should be paying right now? Should media companies be paying to get through the noise? No. Or journalists? Absolutely not. Look, NPR came off and they're like, yeah, it didn't make a difference. They just took the verification badge off New York Times. It doesn't matter. It just doesn't. This is... That was, it doesn't matter for the sales of... Or for ads, for getting people onto their...
their shows, but it might matter for shaping conversation. I don't think it does. I just think it's overblown because journalists are there and Elon Musk makes a spectacle of himself. But in the real matters, I do think broad social media... Mm-hmm.
Absolutely has an impact. Broad social media does, 100%. Not Twitter necessarily, but broad social media. And not just broad social media, but niche social media. Telegram, WhatsApp. Telegram is more influential. WhatsApp is more influential. All these different sites in India, there's tons of them, are more influential. So I would say, yes, broad social media can have an impact, including creating riots, real riots.
due to misinformation that people see online. Or even just information. Yeah, it's through their phones. But the most valuable moment for me of that conversation was really when Cheyenne described the process for verification. 100%. How he does his work, really. And I think that...
Many of us, we do that work when we see a piece of information, but we cannot expect that everybody on the internet is doing that work all the time before sharing. They aren't at all. They aren't at all. I spent a lot of time many years ago trying to prove Hillary Clinton wasn't a lizard. It was just one of these things. Did you succeed, Kara?
You know, I did. I ran down. I went to the person who was putting it up and I ran down every one of their sources, and it was all wrong. Even coming back to them, they were like, I don't believe that. And it was incredible. It's just pointless. It was ultimately, it was pointless. But a lot of the internet is a Rorschach test, right? It's like people see what they want to see out of it, out of the fog. Yeah. Well, I think the most important thing is when you're trying to deal with these problems, again, something Deborah has talked about,
You don't talk about the opinions and facts. You talk about personal experiences with them and you can start to get to the truth through that because you remove the anger part or this fact, that fact. And unfortunately, facts have been weaponized and opinions are, you know, there's an expression, every asshole has one, you know, pretty much. So that's the problem. So three predictions. Is this moment going to change how platforms moderate? No. No, not at all.
I don't think they care. Then second, do you think the DSA, the Europeans, will have any teeth? No, they will a little, but, you know, they got to collect. And do you think this moment or the proximity of the election is going to change the way that the U.S. government looks at or U.S. stakeholders look at regulation of tech companies? I don't know. We'll see. Look how they're doing right now. They're doing terribly. They can't decide anything. They can't decide on lunch, these people. And so, no, no, they're inadequate to the task.
So no, sorry. It's always so pessimistic. I'm not pessimistic. I'm realistic. You're realistic. It's the way things are. They don't care. They don't care enough. And at some point, they'll be made to care. But an insurrection in the United States Capitol wasn't enough for them. So I don't know what to say. All right. Well, they can't decide on lunch, but we will go have lunch. Kara, do you want to read us out, please? Yep. Today's show was produced by Naima Raza, Christian Castro-Rossell, Megan Burney.
and Claire Tai. Special thanks to Kate Gallagher. Our engineers are Fernando Arruda and Rick Kwan, and our theme music is by Trackademics. If you're already following the show, clearer days ahead. If not, you're stuck in the fog. I know I am right now. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Thursday with more.