
How Tech Journalists Are Fueling the AI Hype Machine

2024/5/29

On the Media


Transcript


Listener supported. WNYC Studios. Hey, it's Micah. This is the On The Media Midweek Podcast. Happy Memorial Day. Hope you did something fun or relaxing. Hope you got to be outside a little bit. We've found sometimes that over holiday weekends, our listens dip a little bit. And so you might have missed a piece I did on the most recent show about...

kind of lazy tech journalism and how reporters just time and time again fall for whatever Silicon Valley is hawking. They did it with the gig economy, and now they're doing it with artificial intelligence. We were pretty proud of the piece, and so we're going to rerun it for the pod extra. Enjoy.

Last week, OpenAI released a demo of its latest technology, GPT-4o, which responds to prompts and now has a new voice. A few, actually, but this one, called Sky, got the most attention. You've got me on the edge of my... Well, I don't really have a seat, but you get the idea. What's the big news?

People online said the demo reminded them of a 2013 film about a man who falls in love with his AI voice assistant, voiced by Scarlett Johansson. Good morning, Theodore. Good morning. You have a meeting in five minutes. You want to try getting out of bed? Get up! You're too funny. Within hours of the demo's release, OpenAI CEO Sam Altman tweeted the word "her," the name of that very film.

Which, by the way, he has publicly described as an inspiration for his work. Then, days later... The actor says she turned down the offer to be the voice of the artificial intelligence system and that they made one that sounded just like her. Johansson said Altman approached her eight months ago and she turned down his offer to lend her voice to the software.

He approached her again just two days before the release of the demo. She said, "I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference." In response to requests from Johansson's lawyer, OpenAI has said they're discontinuing the voice they called Sky.

But the company maintains they hired a voice actor for the job before approaching Johansson and made no attempt to emulate the actor.

The debacle underscored how these large language models often rely on human labor and data, often taken without permission. Yet despite such problems, many AI boosters in Silicon Valley and members of the press insist that artificial intelligence holds the keys to a shining future. We may look on our time as the moment civilization was transformed, as it was by fire, agriculture, and electricity.

Oh man, when the AI coverage started, I thought, here we go again. This is the same old story. Sam Harnett is the author of a 2020 paper titled "Words Matter: How Tech Media Helped Write Gig Companies Into Existence." I wrote it because I was really disappointed with the coverage I was seeing and some of the coverage I ended up doing. Today, Sam hosts a podcast called Ways of Knowing.

But back in 2015, he was a tech reporter for KQED in San Francisco, filing stories for Marketplace and NPR. I was a young reporter. You've got to do these quick stories. And before you know it, you're using all these words like startup or tech or platform. I started thinking these words themselves are misleading, like ride-share for an Uber. What are you sharing? You're paying someone to drive you around. You're not sharing anything, you know?

These euphemisms were pushed by the tech industry and quickly adopted by the press during the early days of the gig economy. In his paper, Sam listed several media tropes that defined that era, like the first-person review. He points to a Time magazine cover story titled "Baby, You Can Drive My Car, and Do My Errands, and Rent My Stuff."

And those experiential first-person stories, they're not critical at all, right? It's all about how you're engaging with this thing and what it's like. And even when they are critical, you're still giving the company a lot of free advertising by casting its product as a totally new thing.

Yes, but on the consumer side, you could see where your car was before it got to you. You could see who the driver was. You could know how much it was going to cost. You didn't have to give cash to a stranger in a car. Right. That's innovation, no? Well, I mean, you look at Uber and Lyft, they're using GPS and phones. GPS had been around for decades. Phones were relatively new, but Uber and Lyft didn't invent the phone.

Really, the innovation seemed to be ignoring local transportation laws and ignoring labor laws. And it was all being cast as techno-utopianism, this inevitable future of work. It's a mass transit revolution sparked by the universal ridesharing company that goes by only a block-letter U on its windshield. Of course, we're talking about Uber. I hope that all regulators will take the time to understand that most of

these drivers greatly value the freedom and flexibility to be able to work whenever and wherever they want. The industry wants those drivers to stay independent contractors. That's cheaper for those companies. It's also at the core of their business. So what Uber does, this is the future. It is the sharing economy. The marketplace will win, but we've got to support...

But really, for me, it was talking to a lot of taxi drivers and realizing that this is work that has no social safety net. This is work that has no overtime. There's no guaranteed minimum wage. Work that's undoing protections that were hard fought 100 years ago. Meanwhile, some outlets focused on what Sam Harnett calls the outlier worker profile.

CNBC wrote about 38-year-old David Feldman, who, quote, quit the rat race and left his finance job to make six figures picking up gigs on Fiverr, a site that connects customers with freelancers. The Washington Post ran a story titled "Uber's Remarkable Growth Could End the Era of Poorly Paid Cab Drivers,"

which cited these claims from the company. The people that drive their taxis barely break even, whereas someone who drives an Uber can make a net $90,000 a year. The median pay for Uber drivers in New York City, $90,000 a year for a 40-hour work week. Wow, that is...

the same as a post-secondary science teacher and a financial analyst here. That's a lot of money. Claims that landed Uber in court. The Federal Trade Commission will send nearly $20 million in checks to Uber drivers. This is all part of a settlement with the ride-hailing company. The FTC found Uber exaggerated the yearly and hourly income that drivers could make in certain cities.

Instead of pressing Silicon Valley executives on how these companies were, say, misleading workers, many journalists did uncritical interviews. They were threatening to sue you, right? They were threatening to shut us down. Host Guy Raz in 2018, interviewing Lyft co-founder John Zimmer for NPR's podcast How I Built This. The opportunity was massive, and the regulatory obstacles were just as massive. How long did it take for you to overcome...

those initial regulatory challenges? Like, was it months, years? I'd say at least a year, probably for that first year. They cast the people running these companies as heroes who overcome adversity, says Sam Harnett, who create a thing that the listener wants to see succeed. It's kind of astonishing how the tech industry keeps finding ways to get lots of media coverage that ends up turning into lots of investment and lots of power.

Speed is imperative. And if they can get up and running quickly enough, and if their business model can become a thing that's regularly used by consumers and embedded in society, then they become too big to regulate. I think we see it with a lot of new technologies, whether it's the gig economy, whether it was with crypto a few years ago, whether it's AI.

Paris Marx is the host of a podcast called Tech Won't Save Us and the writer behind the Disconnect newsletter. We often see these very rapid embraces of whatever the next new thing from the tech industry is and less of a desire to really question the promises that the companies are making about them. Marx agrees that some of the same media tropes that Sam Harnett identified are recurring now with AI, like the first-person review.

After ChatGPT was released in November of 2022, the companies were selling the idea that we were closer than ever to computers matching human-level intelligence. And one of the things that we saw a lot of media organizations doing was actually going on to ChatGPT and having conversations with it. And there's a really striking example of this that was published in The New York Times by Kevin Roose, their tech journalist.

And he basically had this two-hour conversation with this chatbot, which he said wanted to be called Sydney. It had its own name. It was telling him that it wanted to be alive and was ultimately asking Roose to leave his wife and have a relationship with the chatbot.

And the way that it was written, it was ascribing intentionality to this chatbot. It was thinking, it was having these responses, it was feeling certain things, when actually we know that these chatbots are not doing anything of the sort, right? The science fiction author Ted Chiang basically called these chatbots autocomplete on steroids. We're used to using autocomplete on our phones when we're texting, where it's suggesting the next word, and this is just taking it to a new level.
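
To make that analogy concrete, here is a minimal toy sketch of next-word prediction, the mechanism being gestured at. It is illustrative only: real GPT-style models use neural networks over subword tokens, not word-pair counts, but the generation loop has the same shape: predict a likely next token, append it, repeat.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": a bigram model that, like a phone keyboard,
# repeatedly predicts a likely next word from counted word pairs.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def generate(start, n=6):
    out = [start]
    for _ in range(n):
        candidates = next_word.get(out[-1])
        if not candidates:
            break
        # Greedily take the most frequent continuation.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat"
```

There is no intention or feeling anywhere in that loop, which is the point of the analogy; scale and training data make the output fluent, not sentient.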

The fact that a nascent chatbot with millions of dollars of funding behind it would say such outrageous things: is that not in and of itself newsworthy, even if the chatbot's own claims about its human-like intelligence were just outright wrong?

I think it definitely can be, but then the question is, how do you frame it and how do you explain it to the public? This was February of 2023. ChatGPT was released at the end of November of 2022, so we were still really early in the public's getting to know what this technology was. It really misleads people as to what is going on there. Another trope that Harnett lays out in his paper is the founder interview. Today, we've seen...

so many fawning conversations with tech leaders who are at the forefront of artificial intelligence. Yeah, absolutely. One of the ones that really stands out, of course, is an interview that Sundar Pichai, the CEO of Google, did with 60 Minutes back in April of 2023. And in this interview, Sundar was talking about how these AIs were a black box and we don't know what goes on in there. Let me put it this way. I don't think we fully understand how a human mind works either.

One of the biggest problems there was not just what Sundar Pichai was saying, but that the hosts of the program who were interviewing him and conducting this were not really pushing back on any of these narratives that he was putting out there. Of the AI issues we talked about, the most mysterious is called emergent properties. Scott Pelley of 60 Minutes. Some AI systems are teaching themselves skills that they weren't expected to have. For example...

One Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know. After the piece came out, AI researcher Margaret Mitchell, who previously co-led Google's AI ethics team, posted on X saying that according to Google's own public documents, the chatbot had actually been trained on Bengali texts.

Meaning this was not evidence of emergent properties. Here's another exaggeration that made its way into a TV news piece. The latest version, GPT-4, can even pass the bar exam with a score in the top 10%. And it can do it all in just seconds. GPT-4 scored in the 90th percentile on the bar exam. Was that legit?

Yeah, so that claim was debunked recently. Julia Angwin is the founder of Proof News. She recently wrote an op-ed for The New York Times titled "Press Pause on the Silicon Valley Hype Machine." An MIT researcher basically re-ran the test and found that it actually scored in the 48th percentile. And the difference was that

When you're talking about percentiles, you have to say who are the other people in that cohort that you're comparing with, right? And so apparently, OpenAI was comparing to a cohort of people who had previously failed the exam multiple times.

OpenAI compared its product to a group that took the bar in February. Those test-takers tend to fail more often than people who take it in July. And so when you compared it to a cohort of people who had passed at the regular rate, you got to this 48th percentile.
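
A quick sketch of why cohort choice moves the headline number so much. The score distributions below are hypothetical, invented purely for illustration; they are not the actual exam data from OpenAI's report or the re-evaluation.

```python
import random

random.seed(0)

# Hypothetical score distributions, invented for illustration only.
# Repeat test-takers tend to score lower, so the same raw score ranks
# far higher against them than against the full pool of examinees.
repeat_takers = [random.gauss(255, 25) for _ in range(10_000)]
all_takers = [random.gauss(290, 25) for _ in range(10_000)]

def percentile(score, cohort):
    # Share of the cohort scoring at or below `score`, as a percentage.
    return 100 * sum(s <= score for s in cohort) / len(cohort)

model_score = 297  # hypothetical raw score for the model
print(f"vs. repeat takers: {percentile(model_score, repeat_takers):.0f}th percentile")
print(f"vs. all takers:    {percentile(model_score, all_takers):.0f}th percentile")
# Same raw score, two very different headlines: roughly the mid-90s
# against the weaker cohort versus around the 60th against everyone.
```

The raw performance never changes; only the comparison group does, which is exactly the gap between the 90th-percentile press release and the 48th-percentile re-evaluation.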

The problem is that the paper comes out, it's peer-reviewed, it goes through the academic process, and it arrives a year later than the claim. Tell me about Devin. This is a red-hot product from a new startup that claims to be an AI software engineer. Can it do what its creators claim it can do?

So Devin is from this company called Cognition, which raised about $21 million from investors and came out with what they called an AI software engineer that they said could do programming tasks on its own. The public couldn't really get access to Devin, so there wasn't anything to go on except these videos of Devin supposedly completing tasks. I'm going to ask Devin to benchmark the performance of Llama on a couple of different API providers.

From now on, Devin is in the driver's seat. And the press wrote about it as if it was totally real, right? Wired ran a piece with the headline "Forget Chatbots. AI Agents Are the Future." Bloomberg did a breathless article about how these programmers are basically writing code that would destroy their own jobs.

There was a software developer named Carl Brown who decided to actually test the claim. I have been a software professional for 35 years. Here's Carl Brown on his YouTube channel, Internet of Bugs. For the record, personally, I think generative AI is cool. I use GitHub Copilot on a regular basis. I use ChatGPT, Llama 2, Stable Diffusion. All that kind of stuff is cool. But lying about what these tools can do does everyone a disservice.

So he took one of these videos where Devin was aiming to complete a task and he tried to replicate exactly what was happening. He did the task in 36 minutes and the timestamps in the video show that it took Devin more than six hours to do the task. What Carl says is that Devin is generating its own errors and then debugging and fixing the errors that it made itself.

The company basically acknowledged it, actually, in tweets. They didn't respond to my inquiries, but they basically said, yeah, you know, we're still trying to make it better. But it was a classic example of how journalists shouldn't just believe a video that claims to show something happening without taking a minute...

to even carefully watch the video or ask to have access to the tool themselves. If I started a company and raised millions of dollars in funding, I would be under a lot of pressure to prove to the public that it works. And you'd think that people who cover Silicon Valley would understand that dynamic. Totally. But I mean, I will tell you that after my piece ran in The New York Times questioning whether we should believe all this AI hype,

a reporter at Wired did an entire piece basically trashing my piece. And the title of it was "We Should Believe the AI Hype." Really? Yes.

Okay. And what was their argument? Basically that in the future, I will be proven wrong because it will all get better. And that's sort of the company's argument too, which is, don't believe your lying eyes, believe the future that I'm holding up in front of you. For journalists, I don't think our role is to call the future. I think our role is to assess the present

and the recent past. The recent past tells us that big tech is very good at generating hype in the press and using venture capital to grow really fast and influence regulators. I'm not predicting this will happen with AI. It's already happening. My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world.

Here's Sam Altman, CEO of OpenAI, testifying before Congress last May and discussing why he thinks AI needs to be regulated. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government

to prevent that from happening. Just a month later, Time magazine revealed that OpenAI had secretly lobbied the EU to go easy on the company when regulators were drafting what's now the largest set of AI guardrails. Because he is treated as kind of the high priest of this AI moment,

because he had these compelling narratives that were being backed up by a lot of reporting. Paris Marx. He was basically able to convince European Union officials to reduce the regulations on his company and his types of products specifically. And that carried through to when the AI Act was finally passed.

All this while technology companies push the public along a path that they and members of the press say is inevitable. We know that generative AI, the ChatGPTs, the image generators, things like that, are much more computationally intensive than the types of tools that we were using previously. So they require a lot more computing power. And as a result of that, Amazon and Microsoft and Google are in the process of doing a major build-out of large hyperscale data centers around the world,

in order to power what they hope will be this major demand for these generative AI tools into the future. That obviously requires a lot of energy and a lot of water.

I think we have paths now to a massive energy transition away from burning carbon. And so in this interview in January with Bloomberg, Altman actually directly engaged with that when he was asked about it. Does this frighten you guys? Because the world hasn't been that versatile when it comes to supply. But AI, as you have pointed out, it's not going to take its time until we start generating enough power. It motivates us to go invest more in fusion and invest more in new storage.

He said that we're actually going to need an energy breakthrough in nuclear technologies in order to power the vision of AI that he has. He didn't hesitate and say, well, if we don't arrive at it, then maybe we won't be able to roll out this vision of AI that I hope to see. Rather, he said we're just going to have to power it with other energy sources, often fossil energy sources, and that would require us to geoengineer the planet in order to

keep it cooler than it would otherwise be because of all the emissions that we're creating. The existential question I have about AI is, is it worth it? Julia Angwin. Is it worth having something that maybe sorts data better, writes an email for you?

at the cost of our extremely precious energy. And then also, AI is based on scooping up all this data from the public internet without consent. As Sam Harnett said, speed is imperative. It's why big tech is pushing some half-baked AI features.

As of last week, when you type a question into Google, you now see an AI-generated answer. Some people reported that the AI told them to eat rocks and put glue on pizza, which weren't presented as jokes, even though the info appears to have been scraped from Reddit and The Onion.

You know, there's this AI pioneer, Yann LeCun, who works at Meta. He's their chief AI scientist. And he recently tweeted out something I thought was so perfect. He said, it will take years for AI to get as smart as cats. And I thought, that's perfect. I should have just run that instead of my column.

Here's one last issue. When Google AI summarizes legit info from real news sites, there's no need to go to the original source.

meaning even less traffic for ailing media organizations. This is yet another reason members of the press should refrain from Silicon Valley boosterism. Janky new tools may be eating our lunch, but if the recipe was made by AI, we should probably wait to dig in.

Thanks for listening to the podcast extra. On the big show this week, we'll be discussing the rise of Donald Trump's social media platform, Truth Social. Look out for the show on Friday. You don't want to miss it. Thanks for listening.