
A Looming TikTok Ban + A Royal Photoshop Mystery + Your Car is Snitching

2024/3/15

Hard Fork


Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.

I had a big moment over the weekend. What was that? I wore my Apple Vision Pro on an airplane for the first time. Oh, my God. Well, you don't appear to have any visible injuries on you. It doesn't look like you were assaulted. So how did that go? Well, it went great. So I pull it out. I put it on. I connect to the Wi-Fi, just like Joanna Stern told us last week it was possible to do. And I watch Suits

in, like, IMAX, you know, full size. Like, the window that I am watching Suits in is as big as the freaking plane. Meghan Markle has never been larger. Exactly. But I ran into a dilemma, which was, you know how you can turn the dial for immersion? Yes. So you can either turn it so that you can see everything around you. Yeah. And you just have this kind of floating window with Suits playing. Right.

Or you can turn the dial all the way to the other side, in which case you don't see anything around you, and you can pick your background. I was on the surface of Mars, so. But what's the case for, like, wanting to see what's around you? So here we go. So I immerse myself. I'm on the surface of Mars. I'm watching Suits. It works great, except I missed the drink cart.

I don't see the drink cart coming by and I miss my drink. That's so great because the flight attendant could have just tapped you on the shoulder and said, do you want a drink? But instead they made the right choice and said, screw this guy. He wants to look like that. He can get his own damn drink. So this is the dilemma of flying in the Vision Pro I have learned. No, this is not actually a dilemma. If you want a drink, you need to engage with reality, my friend. The choice is yours.

I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, a bill that could ban TikTok passes the House of Representatives. What's next? Then, we've got to ask, does Kate Middleton actually know how to Photoshop? And finally, the New York Times' Kashmir Hill joins us with an investigation into how your car may be spying on you. Rev it up. Personally, I'd pull over.

All right, Casey, this week we have to talk about what is happening with TikTok, because it has been a very big week for that app and, I would say, for social media in general. Yeah, there have been a lot of moves over the years to maybe ban TikTok, but what we have seen this week is the most serious of those moves so far. So on Wednesday of this week, the U.S. House of Representatives voted to pass something called the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA. PAFACA!

So this bill passed the House of Representatives on Wednesday with a vote of 352 to 65. So pretty overwhelming bipartisan support for this bill. It's a bill that would essentially require ByteDance, the Chinese conglomerate that owns TikTok, to sell it. So Casey, let's talk about what this bill means and how it got here and what the implications are. But first, let's just

Say, like, what's in the bill? So two things. One, if it gets signed into law, it requires ByteDance to sell TikTok within 180 days. And two, if ByteDance chooses not to do that, the bill creates a process that would ultimately let the president decide whether the app should be banned on national security grounds.

Right. So basically, if you have TikTok on your phone and this bill passes, 180 days after it passes, your app will not be able to get updates anymore and it won't be available in the app stores. So let's just remind each other how we got here, because this is not a new topic. As you remember, Donald Trump, when he was the president, tried to ban

TikTok or force a sale of TikTok. That came close to happening, but then sort of fell apart in the late stages of that process. There have been other attempts to ban TikTok. Montana actually passed a law banning TikTok within the state that was overturned by a court. So this has been a long process, and a lot of different organizations and lobbying groups have been pushing for a TikTok ban for years.

But why do you think this is coming to a head now? Well, in a way, there have been a series of events that brought us to this moment. Over the past year, TikTok was banned on federal devices. A number of states moved to say that, hey, if you are a state employee, we're going to sort of take this thing off of your phone.

Behind the scenes, ByteDance was having a bunch of conversations. They tried to implement this program called Project Texas, which would try to silo Americans' data and create a bunch of assurances, essentially, that TikTok could not be used for evil here in the United States.

And all of those things just fell on deaf ears. And I think as we've begun to approach our election here in the United States, Kevin, lawmakers are increasingly concerned about what's happening. And one thing we learned is that the Biden administration, which really wants to ban TikTok, gave a classified briefing to members of Congress recently in which they made the case. We don't know exactly what was said at this briefing, but whatever it is seemed to really motivate a lot of members of Congress to get this thing out of the app store. Yeah.

Yeah, I think that's a good point. The Project Texas thing that we've talked about on the show before was not successful in terms of convincing Americans and American lawmakers that TikTok was no longer a potential threat to national security. I also remember you went down to a TikTok transparency center, which they were sort of giving tours of to various reporters and lawmakers and skeptics.

to try to say, look, we are being transparent. We are letting people see our algorithms so they can see there's no sort of nefarious Chinese sort of plot to seed propaganda in there. That ultimately did not appear to be convincing to that many people either. Yeah, I mean, it put TikTok in this position of having to continuously prove a negative, which is that it had never been used to do anything bad and never would. And it's just really hard to do that.

Yeah, and I think for a lot of people, including me, the assumption was that there would be this tension over TikTok, but that ultimately nothing would happen.

But now it appears there is this real bipartisan effort that may actually succeed. And the question of why is this happening now is really interesting. And I think it has a lot to do, frankly, with something that we haven't talked about much on this show, which is the conflict in Israel and Gaza, which has brought new attention to TikTok, in part because there's a sort of coalition of people in Washington who believe that TikTok is being used to turn American public opinion against Israel.

And there has been some viral analysis that showed that pro-Palestinian videos on TikTok were dramatically outperforming pro-Israel videos. And the Wall Street Journal reports that this was a big issue that caught the attention of lawmakers on both sides of the aisle who said, this app is a problem. It is basically helping to brainwash American youth into not supporting Israel, which

I think is dubious for all kinds of reasons, but that does appear to have been a factor here. Yeah. You know, around this time, there was a tool that advertisers could use to track the performance of various hashtags. And some researchers used it to see that videos with pro-Palestine hashtags appeared at times to be getting more than 60 times more views than videos with pro-Israel hashtags. We don't know why, and there's not really any evidence that TikTok was

putting its thumb on the scale. But that research really seems like it scandalized Congress and once again drew attention to the fact that if someone at TikTok or ByteDance or the Chinese Communist Party did want to put their thumb on the scale, there was absolutely nothing to stop them. And this has been the core problem that TikTok has had from the start, which is that even if it does nothing wrong, there is always the potential that the Chinese Communist Party could force them to. Right. So let's talk about the process from here. So the first thing that

is needed to turn this bill, PAFACA, into law is that it needs to pass the House. That has happened. Now, the next step is it needs to be introduced to and passed by the Senate. Do you think that's likely to happen too? Well, actually, maybe not.

There's been some reporting in the Washington Post this week that Senator Rand Paul has come out and said that he does not intend to support this bill. He thinks that Americans should be free to use whatever social media apps they want to, and he just does not see the need for this app to be banned. I would also say that Chuck Schumer, who is the Senate majority leader and a Democrat—

seems pretty wobbly on this one as well. He has not committed to bringing this thing to a floor vote. And, you know, talk to lobbyists on Capitol Hill and they'll tell you that Chuck Schumer is a big reason that a lot of tech bills don't get passed, because he just sort of doesn't believe that they need to be regulated much at all. So because of those two reasons, and the fact that there is still no companion bill in the Senate,

Yeah, Kevin, I think this one does have long odds ahead of it. But what do you think? I think it could pass the Senate. I think, you know, I've been surprised at the sort of bipartisan support. We've seen a few lawmakers come out vocally in defense of TikTok, or at least in opposition to forcing it to be sold. But the majority of congresspeople have signaled that they would support this. If it is passed by the Senate, it would then move to President Biden, who would need to sign it. The White House has indicated that he would. And

From there, then the bill would need to sort of survive legal challenges, which ByteDance has signaled it will mount. If this bill is passed, they will try to stop this in court. So the bill would need to overcome that challenge.

But if it does, say all of that happens and this bill is passed and holds up in court, it would give ByteDance about six months to undertake a sale of a massive tech product that it really doesn't want to sell. Yeah. And when you talk to the folks at ByteDance, they will say, make no mistake, this is not about a sale. This is a de facto ban of TikTok in the United States.

And I believe the reason that they are saying that is that the Chinese government has given indications that it will not allow ByteDance to divest TikTok. And so ByteDance will effectively have no choice but to stop offering the app in the United States. Yeah, so TikTok has obviously not taken the news of this bill lying down. They have mounted an aggressive lobbying campaign in Washington. They have a huge lobbying team there that is,

presumably, you know, fanning out over Capitol Hill, trying to convince lawmakers to drop their support for this bill. They also have started mobilizing users. So this week, TikTok sent a push notification to many, many of its United States users that urged them to call their representatives and called this law a, quote, total ban on TikTok, which is not totally true. It doesn't ban TikTok. It just forces ByteDance to sell it.

But the company wrote in this notification that this bill would, quote, damage millions of businesses, destroy the livelihoods of creators across the country, and deny artists an audience. Did you get this notification? I did not because I do not enable TikTok to send push notifications because like every other app, it sends too many.

Right. So this was shown to users who opened their apps, and it did apparently result in a flood of calls to congressional representatives and their offices. One congressman, Florida's Neal Dunn, told the BBC that his office had received more than 900 calls from TikTokers,

many of whom were vulnerable school-aged children and some of whose extreme rhetoric had to be flagged for security reasons. Which I don't understand. Were they, like, threatening the congressperson if TikTok were banned? I'm going to assume that, yes, they were absolutely threatening the congressperson. So a flood of kids contacting their representatives to complain about this bill was...

What do you think about this strategy? Well, we have seen it used effectively before. Uber would do this in cities that were threatening to ban Uber. They would sort of show information in the app saying, hey, why don't you call your representative here? We've seen Facebook and Google put banners in the app talking about issues of concern to them, net neutrality and other things. And this has

always been pretty non-controversial, actually. And it has demonstrated that these apps have really dedicated user bases who want to keep using them. And so I was laughing this week, Kevin, when Congress was just so outraged at the fact that some of their constituents called them to express an opinion about a bill that was before them. Right. I do think it was an interesting strategy in part because one of the charges that

TikTok and ByteDance are trying to dodge is this idea that they could be used as like, you know, sort of secret tools of political influence. And one of the ways that they are responding to this is by becoming a non-secret tool of political influence, like literally trying to like influence the political process in the United States with these push notifications. But that aside, I do think this is a playbook we've seen before from other companies that are being challenged by new regulation. But like, I just...

want to say again: is the message from Congress that they don't want to hear from their constituents about this? Is it, like, only call us if you're a registered lobbyist? Is that what they're telling us? Because that kind of sucks. Yeah, it kind of sucks, but it also is the case that these congressional offices are not set up to handle the volume of incoming calls. I don't know, you try getting 900 calls from angry teens, see how you like it. Why else do the

phones exist in the offices of Congress if not to elicit constituent feedback? It's like, oh, what, like your DoorDash order is at the front door? Is that the message that wasn't getting through? I don't understand this. I don't know. I think you should be able to text your congressperson, because, you know, they text us so often when they're fundraising or trying to get elected. I think it would be turnabout as fair play. I agree with that. So another wrinkle in this TikTok story is what has been happening with Donald Trump, because Donald Trump, obviously no fan of TikTok,

under his administration, tried to force the app to be sold, much like the Biden administration has done this time around. But he's flip-flopped. He did a TikTok flip-flop. And he now says that he does not support banning TikTok in the U.S. He told CNBC on Monday that, quote, there's a lot of good and a lot of bad with TikTok. And he also said that if TikTok were banned, it would make Facebook bigger, and that he considers Facebook to be an enemy of the people.

So, Casey, why is Donald Trump now changing his tune on TikTok? Well, look, you know, you're always on shaky ground when you try to project yourself into the mind of Donald Trump. But we know a couple of things. One is that there is this billionaire with the incredible name of Jeff Yass.

Yass, queen. Jeff Yass is a very rich person and big donor who recently befriended Trump. And Yass's company owns a 15% stake in ByteDance.

And he, we believe, has been lobbying Trump behind the scenes. And the thought there, the suspicion there, is that there is some sort of quid pro quo: like, hey, you leave TikTok alone, I'm going to be a big backer of your campaign at a time when you desperately need money. Yeah. So Donald Trump has officially gotten yassified.

This is a story about the yassification of Donald Trump. So another factor here is that a lot of people in Donald Trump's camp, including Kellyanne Conway, his former advisor, have been lobbying him in TikTok's defense.

The Washington Post also reported recently that part of Donald Trump's antipathy toward Facebook in particular has been fueled by watching a documentary about how Mark Zuckerberg's donations to sort of election integrity causes in 2020

helped fuel his defeat. According to the Washington Post, Donald Trump watched this documentary about Mark Zuckerberg's political donations, got very mad about it, and has ever since been sort of opposed to Facebook on any grounds. Obviously, banning TikTok in the U.S., one of the main beneficiaries of that would be Facebook and Meta because they have

competing short form video apps like Instagram Reels. Yeah, and we should say the money Zuckerberg donated was to support basic election infrastructure so that people could vote. These were not partisan donations. These were donations to, like, local election offices to make sure that the election could just sort of run smoothly. And because the Republicans lost it, it's infuriated them ever since. So this is a huge talking point on the right

is that Mark Zuckerberg is an enemy of the people because he supported people being able to vote. So just want to say that real quick. Now, what I will also say, though, Kevin, is that Trump is right that one of the two primary beneficiaries of such a ban is Meta. And we've spent a long time now in this country worrying that Meta is too big and too powerful. And this would absolutely make Meta bigger and more powerful. And the other one, presumably, is YouTube, right? Is Google and YouTube. YouTube is already the most used app

among young people in the United States. And if you take away TikTok, you better believe that the average time they spend on YouTube is about to go up. So...

One question that I have for you that I don't know the answer to is, do we know if Meta and Google, which owns YouTube, are doing any kind of lobbying around this bill? I remember several years ago, there were stories about how Meta had hired a GOP lobbying firm called Targeted Victory, basically to try to convince lawmakers that TikTok was a unique danger to American teens.

What are TikTok's rivals doing this time around? So I don't have any specific knowledge of what they're doing in this case, but for the exact reason that you just mentioned, I do assume that their lobbyists are in the ears of lawmakers saying, hey, this is the time to get rid of this thing. This thing is dangerous. Meta is always scheming to eliminate its rivals whenever they can. This is a really juicy opportunity. Why else would you pay the lobbyists that they pay if you weren't telling them to go hard after this? Right. So let's talk about

the sort of core argument here that TikTok needs to be banned or sold because it is a threat to national security. And maybe a good way to do this would be for us just to outline, like, what is the best possible case for banning TikTok? And then we'll talk about the case against it. But like, let's like really try to steel man the worries that people have here.

What is the best possible argument you can imagine for banning TikTok? I would say a few things. One is essentially a fairness argument. China does not allow U.S. social networks to operate there, even though we allow their social networks to operate here. And I think that there is a question of essentially fair play, right? China gets this playground where, if they wanted to, they could push pro-China narratives using these big apps that they have built in the United States.

The United States does not have the same opportunity inside China. So that's one thing I would say. The second thing is that I think that the data privacy argument is real. We have had Emily Baker-White on this show. ByteDance used data

about her TikTok account to surveil her and other journalists because they were worried about what she was reporting on about their company. So this question of could ByteDance use Americans' data against them, it's not abstract. It's already happened. The company's hands are not clean. How many Americans do you want that to happen to until you take action? So those are two things that I would say. What do you think? What are reasons why you might want to ban ByteDance?

Well, one argument is just that we already have laws in this country that restrict the foreign ownership of important media properties. Like a Chinese company would not automatically be allowed to buy CNN or Fox News tomorrow if they wanted to. They would have to basically go through an approval process with the FCC because our laws limit the foreign ownership of those kinds of broadcast networks.

Rupert Murdoch, in fact, basically had to become an American citizen before he could buy Fox News because that was the law on the books then and the law on the books now. So in some cases, it is strange that we would allow a Chinese company, a company owned by an adversary of the United States, to own a very important broadcast medium in the United States. We don't allow it on TV. Why would we allow it on smartphones? So that's one argument there. Another argument for banning TikTok is essentially that

the existence of an app this popular with Americans, controlled by an adversary of the U.S., is itself a risk: not that it already has engaged in kind of sneaky attempts to sway American public opinion, but just that it could. We've now seen just this week that when TikTok wants to, it can try to get, you know, a bunch of American young people to call their congresspeople. That is a political influence campaign, and it's one that TikTok itself was behind.

And you have to think, like, what could TikTok do in the upcoming election? What would it do in the case of a war between China and the U.S.? If it can mobilize American citizens to oppose a TikTok ban, consider what it could do if, for example, China invaded Taiwan. What it could do if there was a war between, you know, a Chinese-backed state and the United States or its allies.

There are so many ways that an app this powerful in the hands of an adversary could be a danger to U.S. interests. And so, you know, while some of the more extreme arguments for banning TikTok on national security grounds don't really register with me, for me, it's more of, like, well, what could happen in the future? How could this thing be used in a way that works against American interests? Yeah.

Well, at the same time, Kevin, there are reasons why I think it would be bad to ban TikTok, and we should talk about those. Yeah, so what are the most convincing reasons not to ban TikTok or to oppose this bill? So one big reason is that you're not addressing the root of the problem here. We don't have...

data privacy laws in this country. If you're worried that your data might be misused by TikTok, I guarantee you there are a lot of other companies that are actively misusing your data and profiting from it. In fact, we're going to talk about that later in this very show, right? So this issue goes far beyond TikTok.

And I'm continually surprised that Americans aren't more upset about all the ways that their data is being misused today. And my worry is that when we ban TikTok, Congress will essentially wash its hands of the issue, even though Americans are going to continue to be actively harmed by things that, at least when it comes to TikTok, are still mostly theoretical. Yeah. The other argument against this bill that I've found compelling is one that

organizations like the ACLU and the Electronic Frontier Foundation have been making. Both of those groups oppose this bill, in part because they say that what's happening on TikTok is First Amendment protected speech, and that essentially by banning this app because you don't like what's being shown to people on it, you are not just punishing a foreign government, you are also punishing millions of Americans who are engaging in constitutionally protected speech on this app.

And moreover, these organizations say, this just gives a blueprint and a playbook and a vote of support to any authoritarian government around the world who wants to censor their own citizens' speech on social media. If you are a dictator in some country and you don't like what people are sharing on an app, you can now point to this bill and say, look, the U.S. is banning social media apps because it deems them a threat to national security. We are going to ban an app

that we don't like as well. Yeah, and I think that that concern is particularly pointed given that it really does seem like a big factor motivating Congress here is that the content on TikTok is too pro-Palestinian for them. That really does seem to be one of the big reasons why this bill gained so much momentum so quickly is something about...

specific political speech. So I do think the courts will weigh in there. I think there's one other thing that is worth saying about why I think banning TikTok could be bad, which is that it takes the other biggest platforms in this country and it makes them bigger and richer and more powerful.

So Meta and Google and YouTube are the other platforms where all sorts of video is being uploaded every day. That is where more video is being consumed. Instagram had more downloads than TikTok last year. YouTube is the most used app among young people in the United States. And when you get rid of TikTok, an app that has 170 million users a month,

they are all just going to go spend more time on YouTube, more time on Instagram and other Meta properties. So it's going to be hugely beneficial to those companies. And before the TikTok controversy came along, Kevin, people like you and me spent most of our time worrying that Meta and Google were too rich and too powerful. So this is just something that worsens that problem even more. Totally. And we know that this is not hypothetical because

TikTok has actually been banned before, in India, in 2020. And we saw what happened after the ban went into effect: there were some little homegrown Indian apps that popped up to sort of capture some of the audience. But the vast, vast majority

of users just started using Instagram Reels and YouTube instead. Those companies got bigger in India because TikTok was banned. And I think it's fair to assume that the same thing would happen here. And for all kinds of reasons, you might not want that to happen if you're a regulator. Yeah. So look, we'll see what happens here. I do think that this bill still has a long road ahead of it. You know, again, we have never passed a tech

regulation in this country since the big tech lash began in 2017. So if this happens, it would truly be, like, unprecedented in the modern era. But at the same time, that House bill moved faster than basically, you know, anything we've seen during that time when it came to regulating big tech. And so this is something we should keep our eyes on. Right. So Casey, weighing all of these arguments for and against banning TikTok, like, where do you come out on this? What is your preferred outcome here? Yeah.

I have to say, and it makes me uncomfortable to say, but I do sort of lean on the side of them banning it. Really? Yeah. You know, again, that fairness thing bothers me. The fact that we can't have U.S. social networks in China, but they can have social networks here. There's just like kind of an imbalance there. We have...

rules in this country around media ownership by foreign entities, which you just described for us. I don't understand why you would have those rules for broadcast networks and newspapers that arguably don't even matter anymore and not have them for the internet, where maybe the majority of political discourse takes place now. So this just feels like a moment where we need to update our threat models, update our understanding of how the media works and say, hey, it doesn't actually make sense.

for there to be something like this in the United States. And I say that knowing that if Congress follows through, we are going to get rid of a lot of protected political speech. We are going to make Meta and YouTube bigger and more powerful in ways that make me totally uncomfortable. So I hate the options that I have here, but if you were to make me pick one, that's probably the one I would pick. How about you? Yeah, I mean,

I think my preferred outcome here would be that ByteDance sells TikTok to an American company, to Microsoft. Or remember when Oracle and Walmart were going to team up to bid on TikTok back during the Trump days? Something like that, I think, would actually assuage a lot of my fears about TikTok as a sort of covert propaganda app for the Chinese government, while at the same time allowing it to continue to exist.

If that doesn't happen, I think I'm with you. I think I am more and more persuaded that banning TikTok would be a good idea, in part because...

of the reaction that we've seen from ByteDance and TikTok just over the past few weeks as this bill has made its way through Congress. We have not seen them, you know, engaging in good faith. We've seen them sort of exaggerating, calling this a total ban. We've also seen pushback from ByteDance, and presumably from the Chinese government too, which indicates to me that they do view TikTok as a strategic

asset in the United States and that they do not want to give that up. So for all those reasons, you know, I was skeptical of a TikTok ban and now I think I could get behind it. Well, it sounds like in the meantime, then, if there are any TikToks you love, you might want to go ahead and save those to your camera roll. When we come back, palace intrigue finds its way to Hard Fork from a literal palace.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.

Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

So today we have to talk about the biggest story on the internet this week, which is what is happening

with Kate Middleton, also known as the Princess of Wales. Specifically, what is happening with her photograph that she posted, and the many questions it has raised about the fate of our shared reality? Yes. So if you've not been keeping up with this story, let's just give a basic timeline of what's been happening. And truly everyone has been keeping up with this story, so make it snappy, Roose. Okay. So basically, about two months ago, on January 17th,

Kensington Palace released a statement notifying the public for the first time that Princess...

Catherine had gone into the hospital for planned abdominal surgery. And Princess Catherine is Kate Middleton, because once you're a princess, you get a bunch of new names. Right. Technically, the statement said Her Royal Highness the Princess of Wales was admitted to the hospital yesterday for planned abdominal surgery. This statement comes as a surprise to people who watch the royal family. No one had said anything about her having abdominal surgery. She had great abs. Yeah.

Yes. And the royal family is pretty withholding about personal details, so people sort of roll with it. Then, a couple days later, we get the start of the conspiratorial talk. A Spanish journalist named Concha Calleja, who has written a lot about the royal family over the years, is also something of a conspiracy theorist herself.

She wrote a book suggesting that Michael Jackson had been murdered, to give you a sense of where this person falls on the truth-versus-fiction spectrum. Exactly. So not exactly Walter Cronkite, but this report is widely talked about. She reports that Kate Middleton was actually admitted to the hospital several weeks before Kensington Palace said she was, and that she wasn't doing very well. About a week later, the same Spanish journalist suggests that actually the Princess of Wales is in a medically induced coma. Yeah.

Following this report, a spokesperson for Kensington Palace responds, basically saying this is all total nonsense. From what I understand, this is quite rare that a royal family spokesperson will comment on what are essentially Internet rumors. So the fact that even they denied it then maybe raised some suspicions.

Right. So then following this denial from Kensington Palace, there are a bunch of seemingly small things that just kind of like tip people more into the land of conspiracy theories. Prince William pulls out of a planned memorial service that he was going to go to at the last minute, claiming that it was a personal matter. Then a few weeks later on March 4th, paparazzi take some

grainy photos of the Princess of Wales with her mom, driving in a car. And people immediately start to think, this isn't Kate Middleton, this is a body double. People come up with all kinds of theories about why this is not actually the Princess of Wales, this is someone pretending to be the Princess of Wales, what has happened to the princess. So the suggestion is, this was essentially staged for the benefit of the paparazzi? Exactly. And then

Just a couple days ago, we got the biggest turn of events in the saga so far, which was that on Sunday, which is Mother's Day in the UK. Side note: I didn't know that they had a different Mother's Day than we have here. Well, it's because they have a different word. They call them mums. That's true. So on Mums Day, Kensington Palace released a photo of Princess Catherine with her kids, and it was signed with a C, which is what Princess Catherine does with all of her social media posts. And...

And this photo was presumably intended to dispel these rumors and say, look, here she is looking happy with all of her kids surrounding her. Instead, this totally backfires, because people start pointing out that this photo has been pretty obviously manipulated. I mean, the forensic analysis that was immediately applied to this photo, I truly do not remember anything like it. And on one hand, yes, it's obvious that people are going to be poring over this photo for any signs of strange things. But man, did people...

do this in a hurry. Yeah, it got the full Redditor treatment, this photo did. People noticed that the kids' hands were oddly positioned. There was clearly some editing done on one of the daughter's sleeves. Princess Catherine was not wearing any of her wedding rings. There was one window pane that looked kind of blurred. There was a zipper that was misaligned.

And following this uproar about this photo, the major photo wires that distribute photos to the news media from Kensington Palace issued what is known as a kill order. They killed it. They killed it. This is like the equivalent of, in the old newspaper days, when you realized you were about to make a mistake, and so you'd run down to the printing presses and you would say, stop

the presses, right? This just does not happen all that often, Kevin, that we see one of these kill orders. Yes, so basically a kill order is something that Getty or the AP or another news agency can issue to people who might use their photos saying, do not use this photo anymore. In this case, these agencies said, it appears that this photo has been manipulated and so we do not think you should use it anymore. And

This rarely happens. Mia Sato, who's a reporter for The Verge, looked into this and reported that someone at a wire service told her they could count on one hand the number of kill orders they issue in an entire year. So this is a big deal. It is. So shortly after this kill order came out, Kensington Palace released another statement. This one supposedly from Kate as well, also signed with a C.

Quote, like many amateur photographers, I do occasionally experiment with editing. I wanted to express my apologies for any confusion the family photograph we shared yesterday caused. I hope everyone celebrating had a very happy Mother's Day. C.

So they also didn't release any other photos or give the unedited version of the manipulated photo. And this statement did not do a good job of placating the critics who believe that something more is going on. No, this is a real raise-more-questions-than-it-answers moment, because, you know, if she wanted to, she might have said at least one or two things about how she edited

the photo, right? Or if there was any particular thing. It's just, oh, you know, my daughter's sweater didn't look quite right, and so I wanted to see if I could fix that; obviously I won't make that mistake again. That was not what happened here. Right. This was not a simple case of taking out some red eye, or maybe using the blur tool to cover up a zit on your face or something like that. Right. Or trying to smooth out your skin or make you look younger. Like, you know, the ways you would edit one of your photos. Right.

So we should say, to sort of close the loop on the saga of the Princess of Wales, there are a lot of theories going around out there on social media about what has happened to the Princess of Wales. And what's the most irresponsible one? (Laughter)

Well, the one that I've seen going around that I think is the funniest was someone actually compared the timeline of her disappearance with the production schedule for The Masked Singer and speculated that she's been hiding because she's on The Masked Singer. I don't think that is probably the real answer here. I wish that were true. But, you know,

It's none of our business where the Princess of Wales is. Well, you know, it is sort of a taxpayer-funded position over there, right? So, like, arguably there is some public interest in, you know, how is the Princess of Wales doing? Sure, but I would just say it's none of our business as the hosts of a technology podcast. Because we're American citizens. Exactly. Yeah. Exactly. And we fought a war to not have to care about the whereabouts of the royal family. I would say we fought a war to only care about them when it was interesting. Okay. You know what I mean? So...

you may be wondering, why are we talking about this? This is just some spurious gossip about the royal family. Is this really a tech story? And Casey, what is our answer to that?

Well, look, you're right, Kevin, that generally speaking, when is the last time that a member of the royal family was seen in public is not typically something that we're interested in. But there were so many weird things about this photo that it actually did wind up squarely in our zone. Because what do we often talk about here? We talk about

media being manipulated. We talk about our shared sense of reality. How do we separate truth from fiction? And all of a sudden, a very frivolous story had raised what I would say are actually some pretty important questions. Yeah. So the first thing that people surmised from this was that this may have been AI-based

manipulation in some way, because it is 2024 and a lot of AI image manipulation is going on. And it's admittedly very funny to think that the palace was like, gosh, we need to put out a photo of Kate, and so just went into ChatGPT and was like, show us the Princess of Wales and her family smiling for a Mother's Day photo. Right. So it does not actually appear that this was due to AI. You know, obviously AI image generators have

well-documented problems. Sometimes they put extra fingers on your hand. Sometimes they make your eyes look weird. Sometimes they'll put extra hands on your fingers. Yes. But it seems pretty clear at this point that this was not AI. In fact, people have been examining the metadata of this image and have concluded that it was shot on a Canon 5D Mark IV camera and that it was edited in Photoshop for Mac. So this is like not

a generative AI scandal, it appears. But this actually is a really important piece of metadata, Kevin, because something that has happened over the past several years is that the question, what is a photograph, has gotten very complicated. Our friends over at The Vergecast talk about this a lot, because when you take a photo with your smartphone, it's taking many, many images at once and then creating a composite out of them.
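For what it's worth, the kind of check those sleuths ran is easy to sketch. What follows is a rough, hedged heuristic in Python, standard library only: it just scans a file's header bytes for the ASCII strings that cameras and editing tools typically leave in a JPEG's EXIF metadata. The marker strings are illustrative assumptions, not an exhaustive list, and a real analysis would use a proper EXIF parser such as exiftool.

```python
# Rough sketch of the kind of metadata check the internet sleuths ran.
# Cameras and editors commonly leave ASCII strings (camera model,
# software name) in a JPEG's EXIF/XMP metadata near the start of the
# file. A real check would parse EXIF properly (e.g. with exiftool);
# this is only a quick heuristic, and the marker strings below are
# illustrative guesses.

KNOWN_MARKERS = {
    "camera": [b"Canon EOS 5D Mark IV", b"NIKON", b"Apple iPhone"],
    "editor": [b"Adobe Photoshop", b"GIMP", b"Adobe Lightroom"],
}

def scan_image_metadata(data: bytes) -> dict:
    """Report which known camera/editor strings appear in the file's header."""
    found = {"camera": [], "editor": []}
    head = data[:65536]  # metadata sits near the start; 64 KB is plenty
    for kind, markers in KNOWN_MARKERS.items():
        for marker in markers:
            if marker in head:
                found[kind].append(marker.decode())
    return found
```

A hit on both a camera-model string and an editing-software string is roughly the signal people pointed to: the image came off a real camera and then passed through editing software.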

And so with any image that you're seeing in your phone's camera roll these days, there's a good chance that it's not actually what the camera saw. It is a bit of a generative AI experience that you're getting now with every single photo. So if the metadata had come back about the Kate Middleton photo saying this was shot on an iPhone 15, in some ways...

this would be a more complicated question. Yes, it's not just that people can now easily edit photos on their smartphones. It's that the actual cameras that are built into the smartphones often these days have AI manipulation built into them. So one example is the new Google Pixel phone has a feature called Best Take, where basically it takes a bunch of photos. You know, say you're posing for a photo with your family and

in one millisecond when one photo is taken, someone is blinking. And the next millisecond, someone else is blinking or someone's not smiling. You can essentially have it take a bunch of photos and pick out the best versions of each person's face and kind of smush that all into one composite image. And that all happens

without the user sort of having to do anything proactive. That's just like the basic camera on the phone does that. We also know that there's this whole field of what's called computational photography, which is basically building algorithms and AI into the way that cameras actually capture images. So for example, on the iPhone, if you use portrait mode,

That portrait mode is using AI to do things like segmentation, to say, this is part of the background. This should be blurry. This is part of the subject of the photo. That should be crisp and clear. And that is...

essentially a form of AI manipulation that is taking place inside the iPhone camera itself. Yeah, all of this is just to say that there actually is a lot of AI manipulation going on these days in every photo that you're taking with your iPhone. And of course, we think of this as generally benign because this is not inventing children that you don't have. It's not usually putting a smile on your face if there wasn't one there. Although,

If your eyes were closed, it will open your eyes for you, right? So I just think that's good to keep in mind as we move into this new era: the images that we're seeing, these are not the Polaroids that we were taking in elementary school, my friend. Yeah. So I would say the biggest angle that got me interested in this story is just what it means for what people are calling the post-truth landscape, right? We've had lots of people writing their takes on this this week, talking about how this is sort of the canary in the coal mine

for this new era of sort of post-truth reality-making that we have entered into. Charlie Warzel had a good piece in The Atlantic this week where he writes, quote, For years, researchers and journalists have warned that deepfakes and generative AI tools may destroy any remaining shreds of shared reality. The royal portrait debacle illustrates that this era isn't forthcoming. We're living in it. So, Casey, do you think this

portends anything different about our social media landscape or the way that we sort of make or determine what's true in this new era? I think it's definitely a step down that road, but at the same time, I think that if the sort of worst comes to pass, we'll actually look back and we will be nostalgic for this moment, Kevin, because this was a case where we could just look at the photo with our own eyes and know with total certainty that the image had been doctored to the point that the palace had to come out

relatively soon afterwards and say, yeah, you caught us. Our expectation, I think, is that within a couple of years, the palace might be able to come up with a totally convincing image of the princess with her children. And, you know, people who study AI maybe will be able to determine, okay, yeah, this was created with generative AI tools, but maybe they'll say we actually can't say one way or another. That is the truly scary moment, but is this a step on the road to get there? Absolutely. Yeah. I mean, for me, the one thing that surprised me is just how quickly

people jumped to skepticism when this photo was released. It feels like in the span of 10 years, we've gone from pics or it didn't happen to pics, and I'm going to study the pics to tell you why it didn't happen. It's like the mere existence of photographic evidence is not enough to assuage people's concerns about something being real or fake. In fact, in this case,

putting this photo out just fueled the speculation more. Absolutely. Now, one, I think, funny subplot here, Kevin, and I wonder if you have an opinion on this, and it is, does the Princess of Wales use Photoshop? Like, some people saw the statement and said, that's absolutely ridiculous. If you're the Princess of Wales, there's no way you're going to sit down and learn how to use Photoshop.

I can sort of see it from the reverse, though. You're cooped up in that palace all day. You have your ladies-in-waiting taking care of most of the household affairs. Maybe you shoot a few cute pictures of the kids and you say, oh, I don't like the way that my daughter's sweater looks. I'm going to see if I can clean that up. So in a way, I find it totally plausible that the princess would learn how to use Photoshop for fun. What do you think? Yeah, I think if you had told me that, like, you know, the king who's...

Who's elderly. Yes, King Charles was using Photoshop. I would have said, I'm going to need to see some more proof of that. But, you know, Kate Middleton, she's in her 40s. She's a mom. Moms like to edit photos of their kids. You know, have I edited a photo of my kid ever to, like, you know, remove some crud from his shirt? Yeah, you know, I'm guilty. All right, so this is sort of a coin flip. We think whether Kate Middleton knows how to use Photoshop. Yeah, I can see it. I can also see reasons for skepticism there.

So another argument that I thought was interesting, that I wanted to talk to you about today, is something that Ryan Broderick wrote about in his newsletter, Garbage Day, in a post titled Misinformation is Fun, where he's basically saying, look, this now happens all the time. Something comes out. People get upset or nervous about it. They accuse it of being fake. We get all these expert researchers and reporters coming out to fact-check it and say, actually, this is fake, this isn't true. But his basic thing is, look,

People are missing that this stuff is fun. It's fun to speculate. It's fun to spread rumors. It's fun to try to connect the dots on some complicated conspiracy theory. This is a piece that people miss when they write about conspiracy theories, as both you and I have done over the years. Yeah, and it's important, because all of these platforms that seek, often for good reasons, I think, to eliminate misinformation are fighting an uphill battle. And the uphill battle is...

Their users love this stuff. Their users want to spend time on their platforms arguing incessantly about the fate of Kate Middleton. Right. And do you think that the platforms have a responsibility here? I mean, in this case, this was not a platform story. This photo was disseminated, I guess it was put on Instagram and maybe other social media networks, but it was really the photo wires and the photo agencies

stepping in and issuing this kill order that really turned the volume up on this story considerably. So what do you think this says about sort of who is responsible for gatekeeping here and telling whether an image is fake or not? Well, the photo wires here are a great example of an

institution that does still have some authority and does still have some trust in it. And those are becoming fewer and further between in this current world. So I'm very grateful that we have folks like that who can come in and say, oh, yeah, this is obviously doctored. Get it the heck out of there. There will probably be examples like in our election, for example, where that just is not the case. And there is no authority that can come in and sort of say definitively one way or another this was doctored or not.

Yeah. And I just think this whole discussion about doctored imagery is going to get so much harder as more and more cameras just come by default with AI tools installed in them. So like, you know, five years from now, is it even going to be possible to take a quote unquote real photo? Or is every camera and smartphone on the market going to have some kind of AI image processing or improvement built into it?

I've actually hired an oil painter just to sort of create my likeness, because it's the only way that I can trust that I'm seeing my own face, Kevin. I like that. So Casey, I remember a few years ago, when I was doing a lot of reporting on crypto and blockchain projects, one of the things that people would pitch to me periodically in this space was, here's a way to use the blockchain to keep a kind of uneditable version of the metadata to tell the provenance of an image, so that you can have a record on the blockchain that says this image is real, it was not doctored or manipulated in any way, and here's how anyone can go prove it. So does this scandal and the associated drama make you think that something like that is actually necessary?

So look, I don't like solutions that are on the blockchain. I'm not going to say that no one could ever come up with a way to do that that would be fast and efficient and worthwhile. I don't think it's possible to do that today. But there are initiatives to try to verify the authenticity of images on the internet.

So there's something called the Coalition for Content Provenance and Authenticity. This is a consortium of a bunch of tech companies, including Adobe, Google, Intel, Microsoft, and they are trying to come up with some kind of standard so that you can embed in your photo the idea that this image was taken with a camera and it was not just spat out by an AI generator.
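To make the standard's idea concrete, here is a minimal sketch of what a presence check might look like, assuming C2PA's convention of embedding signed "Content Credentials" in the image file itself. The function names are hypothetical, and this only detects whether a manifest marker appears to exist; real verification means parsing the manifest and validating its cryptographic signatures with the official C2PA tooling.

```python
# Hedged sketch of a provenance check built on the C2PA standard.
# C2PA "Content Credentials" are embedded in the image file itself,
# in JUMBF boxes labeled "c2pa". Real verification parses the
# manifest and checks its signatures (the official C2PA SDKs do
# this); this heuristic only detects whether a manifest marker
# appears to be present at all.

def has_c2pa_manifest(data: bytes) -> bool:
    """Heuristically check for an embedded C2PA manifest marker."""
    return b"c2pa" in data

def summarize_provenance(data: bytes) -> str:
    """Produce the kind of note a gatekeeper might attach to an image."""
    if has_c2pa_manifest(data):
        return "Content Credentials present (verify signature before trusting)"
    return "No provenance metadata found"
```

The interesting part is the second function: the point of the standard is to give platforms and newsrooms a machine-readable signal they can act on, like a warning label or a fact-check.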

Now, even in a world where that exists, people are still going to share these photos on social media. They're still going to have endless debates. But it does empower gatekeepers, right? If there is some image that for whatever reason is playing a role in an election, and a Meta or a YouTube or maybe even a TikTok, if that still exists –

If they can look at the metadata and say, oh, yeah, this was just obviously created with generative AI, maybe then they're able to attach a warning label to it. Maybe then they're able to fact check it. And that's really useful, right? Newspapers, other journalistic outlets will be able to do the same thing. So it's still a little bit tricky. Can you actually come up with a metadata standard that isn't easily removed from the image? There's stuff to be figured out, but definitely

if you want to know how I think we will solve this problem, it's going to look something like that. Yeah. So a lot of people are saying that this incident has informed them that we are headed into this post-truth dystopia. I actually took a different lesson from it. This whole thing has made me more optimistic, because it has shown that people actually do care what's true. Hmm.

People actually do care about the stuff that they are relying on in their social media feeds. They do actually care whether or not it's realistic, whether it represents reality. And they are willing to go to extravagant lengths, including picking apart, pixel by pixel, photographs of the royal family, to determine whether what they are looking at is real or not.

I think that's a really smart point because I do think that there is a kind of defeatism that creeps into these discussions. It's like, oh, we're going to have an infopocalypse and we'll never know what's real anymore. But I think what you said is exactly right, that we have a profound need to know what is true and false. And people are clearly ready to volunteer a significant part of every day to figuring out what is true or false if the story is important enough. Yeah, I think

we should have a sort of coalition of amateur sleuths. Like, instead of picking apart these photos, well, you know, maybe that's a good use of their time for one week. But these people clearly have time on their hands, they clearly have expertise in digital sleuthing. Let's put them to work doing something more socially beneficial. Absolutely. Have them solve some cold cases. Yeah, take a lesson from Encyclopedia Brown, Nancy Drew, the Hardy Boys, basically everything I read when I was nine. Those kids were onto something.

When we come back, why your car is snitching on you. It's driving me crazy. I'll allow it.


Kevin, this podcast needs an infusion of cold, hard cash. Kashmir Hill. Yes, today we're talking with my colleague, Kashmir Hill, who writes about technology and privacy for The New York Times. She's got

a new story out and it's a banger. This is one that really caught people's attention and for good reason because the more of this story that you read, the higher your blood pressure goes. It's true. This is a story about cars and all the data that cars collect about their users and drivers and

and how cars have become kind of a privacy nightmare. It's a really good story. It's about some of these new programs that car companies have installed in their cars that allow them to remotely collect data and then not just keep that data for themselves, but actually sell that to places like insurance companies, which can use it to say, well, Casey's a very bad driver. He braked 72 times yesterday for some reason, so we're going to raise his premiums. That famous sign of being a bad driver, braking. Yeah.

The broader point is that cars are now basically smartphones on wheels. They're snitches on wheels is what they are. And they are being used to keep tabs on the people who drive them increasingly and with all kinds of consequences for consumers. Yeah, you truly may have been roped into one of these schemes without even knowing it. And so if you have your car connected to the internet in any way and you haven't yet read Cash's story, I promise you this is one you're going to want to listen to. And when Cash started looking into this, she learned about this whole sort of hidden world of...

shady data brokers and companies that are selling your data from your car to insurance companies. And today we wanted to talk to Cash about what she found out in her reporting and what she thinks is going to happen next, if there's any hope for us in this new world of connected cars or if we're all just destined to be surveilled and snooped on by these things that we drive around. So today...

We're turning Hard Fork into Card Fork. That doesn't work at all. Even a little bit. God. God.

Cash Hill, welcome back to Hard Fork. Thank you. It's wonderful to be on this award-winning podcast. Thank you. Thank you. We did win an award. Didn't you guys win the Oscar for best technology podcast earlier this week? Is the iHeart Podcast Award the Oscar of podcast awards? Many, many people are saying it is. In that case. Yeah, it was us versus Oppenheimer. Congratulations. So, Cash Hill.

Let's talk about this story. When did you decide to write about data collection in cars and why?

So I was spending a lot of time lurking on online car forums, forums for people who drive Corvettes and Camaros and Chevy Bolts, which I drive. And I started to see people saying that their insurance had gone up. And when they asked why, they were told to pull their LexisNexis consumer disclosure file.

LexisNexis is this big data broker and they have a division called Risk Solutions that kind of profiles people's risk. And when they did that, they would get these files from LexisNexis that had hundreds of pages, including every trip that these people had taken in their cars over the previous six months, including how many miles they drove, when the trip started, when it ended, how many times they hit the brakes too hard, accelerated rapidly, and sped.

And when they looked at how LexisNexis had gotten the data, it said the provider was General Motors, you know, the company that manufactured their cars. Right. So your story starts with this anecdote about this man named Kenn Dahl, which, by the way, great name. And he is a 65-year-old Chevy Bolt driver. And like you, he owns a Chevy Bolt, or I guess he drives a leased Chevy Bolt. Right.

And his car insurance went up by 21% in 2022. And he was sort of like, what the heck? Why are my premiums going up? I've never been responsible for a car accident. He goes looking and he asks for his LexisNexis report and gets back a 258-page document

detailing basically his entire driving history. So does he then make the conclusion that this is why his premiums have gone up because he's a bad driver? Well, he says he's a very safe driver. He said his wife is a little bit more aggressive than him. Sure, blame the spouse. Blame Barbie. And she also drives his car. And yeah, he said that the trips that she took

which, during the weekdays, he said, when he doesn't usually use the car, had a few more hard accelerations and hard brakes. And yeah, it looked to him like this is why his insurance went up. And we should say, just because you accelerate hard in a car doesn't necessarily mean that you did anything wrong. Right. And if you had to brake really hard, that also might not have been your fault. Right.

And so I think one of the things that's infuriating, Cash, reading your story, is that this data, which lacks a lot of really important context, is being hoovered up, often without the knowledge of the people involved, and then being used to gouge them on price. But I really was just struck by the way that these insurance companies were so eager to use data that might not actually be incriminating. Right. And LexisNexis said they don't actually give the

trip data to insurance companies; they give a score that LexisNexis assigns the driver, a driver score based on that data, and that's what they're sharing. But interestingly, they didn't give Kenn Dahl his score, so he doesn't actually know what his score is. But that's like such a

corporate thing to say, like, oh, don't worry, we're not giving the individual data. We've created a mysterious, impenetrable black box and handed that to the insurance companies. But, you know, just trust us. It's actually really well done. Well, the other company, Verisk, did say, we just give all the trip data and a score. So let's talk about, like, how widespread is this? Which cars are sending data about their drivers to insurance companies?

Which companies are involved in this? Like, is this an industry-wide practice, or is this just GM and LexisNexis? Is it pretty contained? So if your car has connected services, like if you have a GM car and you have OnStar, or a Subaru and you have Starlink, your car is sending data about how you use it back to the auto manufacturer. At this point, the only ones that I know of who are providing it to insurance companies are GM, Kia,

Honda, Hyundai, Mitsubishi. Subaru is a partial exception; Subaru says they only give odometer data to LexisNexis. But the other companies all have in their apps now this driver scoring, this driver feedback. With GM, it's called Smart Driver.

And if you turn it on, they give you feedback about your driving. Like, drive slower. Be gentle with the accelerator. Buckle your seatbelt. Drive faster. Take more risks. Change the music. Get out of the carpool lane. I'm trying to pass you. Stop looking at your phone. So it's giving you all this feedback. But for people that turn this on...

They may not have realized it, but they were saying yes. Like, a lot of these programs are actually kind of run by Verisk or by LexisNexis. They're the ones giving you the feedback, not the automaker. And so you're kind of just sharing it with them. This was not well disclosed. In the case of GM, it was not evident at all from any of the language. And a lot of people said that Smart Driver was turned on for their cars and they didn't turn it on. They didn't even know what it was.

And GM gives bonuses to salespeople at dealerships who get people to turn on OnStar, including Smart Driver. So they may have been enrolled by the salesman when they bought their car. But for other people, if you turn this on, you're sharing your data. And when you go out shopping for car insurance and you're trying to get quotes,

A lot of the insurance companies will say, "Can we have permission to get third-party reports on you, like your credit file?" And when you say yes to that,

That releases all of that data to go over to the insurance company. And this is just – people did not realize this was happening. And that detail about salespeople being incentivized to enroll people, often without even fully informing people of what they're enrolling in, I think, is really important. Because at the end of the day, this product exists because it is essentially

free money for GM and these other car manufacturers, right? Like, I think in your story, you say they're making millions of dollars a year by selling this data. And is this really anything more than a cash grab? I mean, the car companies say that this is about safety, that they're trying to help people be safer drivers with this, like, you know, driving coach. But, you know, some of these people that had Smart Driver turned on didn't even know it was on. They're not getting the feedback. And, you know, as you say,

General Motors, my understanding is they make in the low millions of dollars per year with this program, which they described to Senator Edward Markey. He asked them, are you selling data? And they said, you know, the data that we sell, that we commercially benefit from, is de minimis compared

to our 2022 revenue. So for them, it's nothing. You know, this is... Right, it's a drop in the bucket. It's a drop in the bucket. This is not moving their overall finances. But it's not de minimis for the people who have to pay 20% more on their car insurance all of a sudden for reasons that they don't even understand. Right. So this is... I think what we're seeing is, you know...

the surveillance capitalism model. You know, the Google, the Facebook, you get something for free, you're paying with your data. It's really spreading to all of these other companies. And the automakers are like, well, we're getting all this data. We can monetize this too. Right now, they're not actually making that much money. Like, low millions is small. And some of the automakers told me, we don't get paid for sharing the data. We only get paid when an insurance company buys it. So,

they don't even have a good business deal on this. Like, they should be getting money for all the data. Like, there is something really damning about saying to the senator, it's like, hey, we don't even make all that much money off this. Okay, well then why are you violating the privacy of your entire user base if you can't even get a good price for it? Yeah. One question I have for you, Kash, is like,

How many other uses are car companies finding for this data? Are they just selling it to insurance companies to raise people's premiums? Or, I mean, I can imagine a situation where, you know, a company might like to know, you know, which drivers are driving past their store every day so they can show them, you know, targeted ads on social media.

How many buyers are there for this kind of car data? I mean, look, there's a lot of information that's flowing out of your car and a lot of potential buyers. At this point, what I mainly have been focusing on is this insurance thing. And when it comes to the insurance data, the one thing that all the automakers pointed out is that they're not providing location details.

It's just when you started the trip, when it ended, how far you drove. It doesn't actually include location data. But that's not to say the companies don't have it. They're just, in this case, not selling that. Well, so, you know, this is in some ways like not a new phenomenon. I mean, insurance premiums

are, you know, they vary based on things like where you live and how new your car is. And, like, are you a young man who is statistically more likely than someone who's older to be in an accident? Those kinds of things are used to change the prices of insurance premiums all the time. And I guess from the insurance company's perspective, this is just one more piece of data that they can use to make decisions about how much of a risk someone is out on the road.

Did you hear any sort of principled defenses of this while you were reporting the story from the insurance companies or the companies that sell data to them? - What the automakers really focus on is that they set up these programs to help people get discounts. - God. - So like some of the programs, like Honda's for example, if you turn on driver feedback,

and then you have a good score, the actual app will kind of offer to connect you with some company who's going to give you a 20% discount. And so they're really focusing on, we're trying to help our customers and get them discounts. What they're not talking about is when that data is flowing out and it's hurting their customers. Like I talked to this one Cadillac driver who lives in Palm Beach, Florida. And in December, it was time for him to get new insurance.

And he got rejected by seven different companies. And he was like, what is going on? Like they just wouldn't sell him insurance for any price. They would not cover him. And his auto insurance was about to expire. And he said, like, what is going on? He orders his LexisNexis report and he has six months of driving data in there.

He says, like, look, I don't consider myself an aggressive driver. I'm safe. But he's like, yeah, I like to have fun in my car and I brake a lot and I accelerate. Like my passenger's head isn't hitting the dashboard or anything like that.

But yeah, I speed. Can you tell whether you're doing donuts in the parking lot? Because I actually would like to know that. But he says, look, I've never been in an accident and I couldn't get insurance. He had to go to a private broker and ended up paying double what he was paying before for insurance. So, you know, it really, in that case, hurt him a lot. So, you know, here's where I guess I can, you know, maybe feign some sort of sympathy for the idea of doing this, which is like,

I do want worse drivers to have higher insurance premiums, right? Like, I think that is how we want the insurance market to work. I think if you're a good driver, your insurance should be lower. And the best way to know who is a good driver and who is a bad driver is to monitor them obsessively.

But what you have revealed here, Kash, is that once we implemented this sort of surveillance system, it seemed to do what all surveillance systems do, which is needlessly penalize innocent people. Right. So like we have all of the downsides of a surveillance system with really none of the upside. I, too, want safer roads, Casey. I get annoyed at aggressive drivers.

And I talked to this one law professor from the University of Chicago, and he said, you know, usage-based insurance, that's what you call this when you tell an insurance company they can watch you, they can see your driving. He said, it works. He said, the impact on safety is enormous and that people drive better when they know that they're being monitored and that they're going to

pay more, you know, if they drive aggressively or unsafely. But that's not what was happening here. People were being secretly monitored and then they're paying more and they don't know why. And that is not going to make the roads any safer. Yeah, that does feel like the stickiest part of this to me is like

The disclosure piece. Like, you know, I've had experiences in the past couple of years where I'll go rent a car if I'm on, like, a work trip or something. And part of what I know when I'm renting the car is that the rental car company is tracking that car.

And I know this because they tell you when you sign up and it's very clearly disclosed, like, you know, we will track this car. You know, if it gets stolen or something, we can help you track it down, that kind of thing. So I know that I'm being monitored while I'm driving a rental car. And so I, you know, I do tend to drive a little bit more conservatively in a rental car. I can imagine that expanding to lots of other cars. But

The people have to know that they're being monitored in order to drive safer as a result of being monitored. Absolutely. So, Kash, talk about your reporting a little bit on this. So you started looking through these car forums. You started seeing evidence that people were having their premiums raised as a result of this surveillance by their

cars. When you approached the car companies, the data brokers, the insurance companies, did they try to deny what was going on? Were they pretty open about it? How did they react? You know, I was expecting denials. I was expecting that, yeah, they would say this wasn't happening, because it just seems so shocking to me that they would be doing this.

But they ended up kind of confirming it. But there was some evasive language about how it worked. One of the big things I was asking different companies is, where do you disclose this is happening? And with GM, the spokeswoman said, it's in the OnStar privacy policy in the section called... Which everyone reads before they click accept in its entirety. In the section about...

sharing data with third parties. And so I go and read that section, and the section doesn't say anything about LexisNexis or Verisk or telematics, which is what you call this driving data. It says, like, you know, if they have a business deal with somebody like SiriusXM, which is the company they named there, SiriusXM is going to get some data from your car. And I just was very shocked that there was nothing more explicit anywhere. And I actually, I told you I have a Chevy Bolt.

So I went to the MyChevrolet app. I connected my car to the MyChevrolet app and went through the Smart Driver enrollment. And all it says is, like, get digital badges. Like, you can get, like, Brake Genius and Limit Hero badges.

Brake Genius! One of my favorite bands from the last year. I'm putting that on my LinkedIn profile. Certified Brake Genius. Get driving tips. And there's just absolutely nothing that would make you realize that as soon as you turn Smart Driver on...

that General Motors is going to start sharing everything about how I drive my Bolt with LexisNexis and Verisk and whoever else I didn't find out about in my reporting. It should just show you, like, the splash screen of a Panopticon, and it should say, is this the future you want? Just tap yes to continue.

Well, I just, I really do think like every company wants this model now. They're just thinking about how can I get an extra, you know, revenue stream through monetizing the data of my customers. And this is not just automakers. This is just anything we're buying now that's internet connected.

I mean, what it made me think of when I read your story was TVs, because a very similar scenario has been happening with smart TVs, which collect all kinds of data about what people watch on them, and then they can sell that data to advertisers.

I bought a new TV a few years ago, and I went through this process of realizing that it is actually cheaper in many cases to buy a smart TV than a non-smart TV, because part of how the smart TV makers are making

money is not through selling you the hardware. It's actually through capturing the data and selling the data. So we do sort of have this phenomenon where as hardware, any hardware, whether it's a car or a TV or a refrigerator or a smart toaster or something, becomes more connected and more like, you know, a device in its own right,

the data actually in some cases becomes more valuable than the actual piece of hardware. Well, I mean, Kash, don't we just see this all the time, that privacy is just increasingly a luxury good for rich people to pay for? Yeah, I mean, I guess so. But even rich people, I mean, they're buying expensive cars and their cars are still sending data back about them. I mean, that's one objection I saw from drivers, like people with General Motors cars. They said, hey, I paid a ton of money for this car. If you're going to sell my data, I want a cut of it. Right.

Yeah. Here's how I would solve this problem. I think that each manufacturer should be allowed to make one car that just sort of sends all your data everywhere and there's nothing you can do about it. You can just sort of choose. So if you buy a GM snitch, you know that that's what's going to happen.

It should cost a hundred dollars. Like it should be the cheapest car on the market. It should cost a hundred dollars. So my little GM snitch, it has a direct line to the police whenever I, like, sort of cross over the center divider. And other than that, knock it off. Yeah. Kash, what has the response been to your story? Are

lawmakers outraged? I'm outraged. Are drivers sending you stories about being spied on? What is the reaction? So I'm definitely hearing from lots of other drivers who are discovering that they had some of these features turned on. They didn't know it,

and they're turning it off. I did a kind of like news you can use box at the bottom of the story. And I said, here's how to figure out if this is what your car is doing. And one of those was, there's this website called the Vehicle Privacy Report that you can go to and it'll tell you, you put in your VIN number and it tells you what your car is capable of collecting. So the person who runs that site said, like, I've had tens of thousands of people come and do it off of your story. I included the website

link to LexisNexis to go request your consumer disclosure file. And not just for auto data, that file is crazy. It had tons of pages for me; it had me associated with my sister's email address from middle school

in the 1990s. I was like, why? Why? So I think everyone should request that, you know, request their Verisk file. And, you know, I talked to Senator Edward Markey for the story, and he's been very interested in what data is being collected by cars and what automakers are doing with it.

And he said, when I described to him what GM had done, he said, this sounds like a violation of the law that protects us from unfair and deceptive business practices. So I'm sure there's going to be more to come from this story. Yeah. And what can drivers do if they are worried that their car is snooping on them and sending data to a data broker or to their insurance company to raise their premiums?

Um, what should they actually do to prevent that? Or are there certain carmakers who are not collecting this kind of data? What can the average driver do? I mean, I can tell you from my time in the car forums, there are some people that don't want their data going out from their car. So they hack it, basically. They, like, turn off the connected services. They make sure that data can't leave their car.

I mean, if you sign up for connected services, you are, you know, connecting your car back to the auto manufacturer's, you know, cloud servers or whatever. It's sending data. So just turning that on, yeah,

that data is getting sent back. And that's why a lot of these companies, you know, when you buy their car, they're like, oh, you get this for 30 days for free. And so most people turn it on. And then even if you don't pay, you're still connected after that. Wow. Wait, so even... So they get you to connect it and then your free trial runs out, but they still keep collecting the data about you that they can sell. That's my understanding. And that's what you agreed to when you, you know, read the...

50,000-word privacy policy. Good Lord. Wow. See, I would at least like my car's surveillance data to be helpful to me in some way. I would like it to pop up a little notification and say, this is the third time you've driven through McDonald's in the past week. Are you okay? Is something going on in your life? Do you need therapy? Yeah.

Last question, Kash. Do you think this will create a bull market for used cars that don't have any of this stuff in them? Like, are we going to see people, you know, running out to the car lots to buy, like, the 1985 Ford Bronco that doesn't have any technology in it? I mean, this was the basic premise of the Battlestar Galactica reboot, by the way: the only spaceship that survived was the one that was not connected to the space internet, so that when the AI Cylons attacked,

only the Battlestar Galactica was safe. Wow. That's true. And I have seen a lot of people commenting in that way. They're like, oh, I'm so glad I still have a car from 2009. If you've got a CD player in your car, it is privacy protective.

Yeah, I am going to go back to the Flintstones car that you have to pedal with your feet. I don't think that was collecting much data on its drivers. All right, Kashmir Hill, thanks so much for joining us. Thanks, Kash. My pleasure. I'm all worked up now. Do you have a car, Casey? No, I don't even have a car. Casey just has strong feelings for no apparent reason. ♪



Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. Today's show was engineered by Alyssa Moxley. Original music by Elisheba Ittoop, Marion Lozano, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergersen.

Go check out what we're doing on YouTube. You can find us at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com. Especially if you know where that princess is. Yeah, please tell us. This podcast is supported by Meta. At Meta, we've already connected families, friends, and more over half the world. To connect the rest, we need new inputs to make new things happen.

And the new input we need is you. Visit metacareers.com slash NYT, M-E-T-A-C-A-R-E-E-R-S dot com slash NYT to help build the future of connection. From immersive VR technologies to AI initiatives fueling a collaborative future, what we innovate today will shape tomorrow. Different builds different.