This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.
I had a big technology victory this week. What was that? Which is that I finally used my cord box. Are you familiar with the cord box? Like a box where you store a bunch of cords? Yes. Every man of a certain age has a box in his house...
with 20 years worth of random cords in it. I do. And it sort of like charts the development of cord history. Exactly. And you just keep this box like deep in the closet. You never use it for anything. And then you die and you pass your cord box down to your heirs. And this is how cord boxes generally work. That's right. But this week I've been doing a technology project at home in my home office, which is I'm setting up something called a KVM switch.
A KVM switch. You don't need to know, but it's basically... No, I need to know. Okay, it's a keyboard video mouse switch. Okay. And it's basically a way for you to attach multiple computers to the same set of monitors and peripherals. Anyway, as part of setting this up, I found myself in need of a few cables. And I thought...
I've got a cable box. I'm going to go in there. I'm going to look for the cords. So I found them. I found HDMIs. I found display ports. I found adapters. It was like Christmas morning for myself. These are some of the great cords. And so I just want to say to all the people with cord boxes out there just collecting dust, keep collecting them. Just keep putting the cords in the box because you never know. Someday you will need them. You know, this story really struck a chord with me, Kevin. Oh.
I'm Kevin Roose, tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, listeners tell us the wildest ways you're using AI at work. Then, legendary YouTuber Hank Green stops by to talk about how creators are reacting to the prospect of a TikTok ban. And finally, deepfakes are coming to Main Street. We'll tell you how one caused turmoil in a Maryland high school.
So Casey, about a month ago, we asked our listeners to send us examples, stories, anecdotes from their use of generative AI at work. That's right. Kevin, we spend so much time on the show talking about AI, what the companies are doing, what products they are making, how it might change the world. But honestly, it's
It is a lot more interesting most of the time to just talk to real people about what they are actually doing with this stuff. Totally. So we sent out a sort of call out asking people to send in their stories. And we got just an overwhelming number of them, like more than 100 responses came in from listeners. And to give you some sense of perspective, we normally only get that many emails when we make a grammatical mistake. It's true. Or when I say um or like too much. Um.
So we went through them all, looking for sort of themes or commonalities and just like interesting stories that stuck out to us. And today we are going to talk about them. Yeah.
So one of the most interesting things about the responses we got is just how wide the range was of things that our listeners reported doing and experimenting with AI at work. But people also just reported having a lot of feelings about the use of AI at work. Some of them were sort of scared or intimidated by AI. Some of them were sort of delighted by AI and how it helped them do more work or be more productive. And so today, I think we should just give people a taste of that range.
And it makes sense when you think about it, Kevin, because this is a general purpose technology. You go to ChatGPT, it is a blank box. You can put anything in it. And so, of course, people are going to have a really wide range of experiences. But that's why I think it's so important to just go in and do that check and try to take the pulse of how people are using this stuff. Totally. So for this segment, we're going to bring you a few short stories that we got from some of our listeners about how they are using AI at work.
Some are going to be voice memos that people recorded and sent, some just through the emails they wrote to us. We're going to just react to and talk about those. Then at the very end, we're going to zoom out and talk about some broader takeaways about some of the patterns that are emerging about how people are or aren't using AI at work. All right. Well, shall we hear the first one? Let's do it. First up, we have a story of a listener who figured out a pretty creative way to use AI to get a client to make decisions faster.
Let's play the tape. Hi, I'm Alec Beckett, a creative partner at Nail Communications, a creative agency in Providence. Now, we have a client who's been really hesitant to make subjective decisions until his CEO has weighed in. But of course, the CEO is always busy and it's very hard finding time on our calendar. So we were seeing timelines starting to slip when Stephen, our director of strategy, came to me with an idea that I really loved. Let's make a synthetic version of the CEO.
So we trained a custom GPT with the CEO's latest strategic plan and as many of her speeches and blog posts and podcast transcripts as we could find. Then before we presented the next round of work to our client, we uploaded that presentation to our synthetic CEO and asked for an opinion.
The feedback was actually kind of amazing. It wasn't all positive, but frankly, it was exactly the kind of feedback we'd love to get more often from our clients. It's very cogent and strategic and clear. And it was really interesting to see how that sort of loosened up our client who felt like he was getting a version of his CEO's perspective. And it seemed to make him more willing to make some of these decisions and keep the process moving forward.
It did get us to start wondering, like, are synthetic CEOs the future? I mean, they certainly are cheaper. Yeah.
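For the curious, the shape of the agency's "synthetic CEO" can be approximated in a few lines. This is a hedged sketch, not their actual tool: they built a custom GPT, while this sketch assumes the plain system/user chat-message format that OpenAI-style APIs accept, and the document strings and prompt wording are invented for illustration.

```python
# Hypothetical sketch of a "synthetic CEO" reviewer. The agency used a
# custom GPT; this approximates the same idea with a plain chat-message
# payload of the kind OpenAI-style chat APIs accept.

def build_ceo_persona(docs):
    """Fold the CEO's public writing (plans, speeches, posts) into a system prompt."""
    corpus = "\n\n".join(docs)
    return (
        "You are a synthetic stand-in for the company's CEO. Respond in her "
        "voice, grounded only in the source material below.\n\n"
        "=== SOURCE MATERIAL ===\n" + corpus
    )

def review_messages(persona, presentation_text):
    """Build the message list for a chat-completion call."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": (
            "Here is a draft presentation for your company. "
            "Give candid, strategic feedback:\n\n" + presentation_text
        )},
    ]

# The messages would then go to whatever chat model you use, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=review_messages(...))
```

The interesting design choice is all in the system prompt: by restricting the model to the CEO's own published material, you get feedback that at least sounds like her priorities rather than a generic executive's.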
I love this. What did you think of this? Well, you know, it raises the question, like, aren't most CEOs kind of a little synthetic to begin with? It's true. The question of what is a CEO doing at any hour of the day, I think, has always been very mysterious. So the idea that you could come in and partially replace them with a chatbot, I think, makes some intuitive sense to me. Look, this is a very creative idea. This is a very fun idea. But of course, it also has me wondering, whatever feedback that these folks are getting from the chatbot, how
How closely does it actually mirror what the CEO would have said? Yeah, so Alec told us over email that he doesn't actually know whether the client ever told the CEO that they made a synthetic version of her. According to him, when they first brought this out, the client just kind of laughed and thought it was a novelty. But once they actually solicited feedback from the
AI CEO with the client watching in real time. He says that the client's tone shifted. And he also said that they're now regularly running their work by the synthetic CEO as part of their process with this client. So to me, I think this speaks to one of the most uncomfortable things about this technology, which is that it is often better at doing the kinds of work that managers and bosses do than the work of individual contributors.
Say more about that. Well, a lot of what managers, especially at big companies, do is kind of synthesizing. It's spotting patterns. It's taking data from across the company and kind of putting it together and projecting forward in some way. It's like it's this sort of
work of prediction and agglomeration and synthesis that AI is actually quite good at doing. Now, obviously, there are parts of a CEO's job that an AI can't do. A lot of that is sort of leadership and setting the tone and the agenda for an organization. But I think if a lot of CEOs
and leaders of big companies are honest with themselves, they'd find that, like, actually maybe this stuff is better at doing our jobs than some of the people who report to us. Here's my thought. My suspicion is that this chatbot probably is not doing an amazing job at mimicking the actual CEO of this company that they're working for. But I wouldn't be surprised if it was like...
a pretty good median CEO. And it's able to give like the median CEO answer. And for folks who are working on this sort of creative work, it is useful to get the median CEO answer, the median CEO feedback. And so for that reason, I say this seems like an interesting tool to put in the toolbox. Totally. It also just opens up a radical new avenue of like workplace conflict, which is that if your boss doesn't want to talk to you or hear your pitch and you're like trying to get a meeting with them, they'll just be like, well,
I'm a little busy right now, but you could talk to my AI clone and they'll give you some feedback on your project. I thought you were going to say the conflict is like, you know, you go to the real CEO and they hate your idea and you're like, well, don't look at me, the fake version of you loved this.
That could happen too. All right. Next example. Our next voice memo comes from listener Jane Endicott. Jane is a freelance writer. She makes short form video scripts for a media outlet that she did not name. And her story stood out for how she has incorporated generative AI into literally every single part of her job.
I do the research. I write the news clip. I write the catchy headline and a banner headline. That all used to be human work, and now it is completely... I have a bot for every single part of my process. I have a bot that my client made that scrapes news articles. So it sends me links and
I use another bot on a site called Poe that summarizes the URL. So I'll go to that website. I'll drop the URL into the bot on Poe and it will summarize that article for me. Then I have a separate bot that creates like a catchy headline. So to kind of give you some perspective, when I started doing this,
I think they had me on, I want to say maybe five to seven clips per day. Now with the process that I have, I'm creating 25 clips per day in three hours. So something that would have taken me 20 minutes can take me anywhere from three to five minutes. What I can do in an hour is like mind-blowingly more than it was in September 2022.
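Jane's chain of single-purpose bots, scrape, then summarize, then headline, is essentially a pipeline, and its shape can be sketched in plain Python. The step implementations below are crude deterministic stand-ins (her real steps call bots on Poe and a client-built scraper, which are LLMs); only the structure, one bot per step feeding the next, is the point.

```python
# Illustrative pipeline in the shape of Jane's workflow. Each function is a
# crude stand-in for one of her bots; in her setup these are LLM calls.

def summarize(article_text, max_sentences=2):
    """Stand-in for the summarizer bot: keep the lead sentences."""
    sentences = [s.strip() for s in article_text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def headline(summary, max_words=8):
    """Stand-in for the headline bot: truncate to a punchy length."""
    words = summary.rstrip(".").split()
    return " ".join(words[:max_words])

def make_clip(article_text):
    """Run one article through the whole chain, one bot per step."""
    s = summarize(article_text)
    return {"summary": s, "headline": headline(s)}
```

The reason this kind of chaining works well in practice is that each step has one narrow job, so each bot's prompt can stay simple and its failures are easy to spot.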
September 2022, of course, right before ChatGPT came out. So, Kevin, what do we make of Jane's AI workflow? So I'm impressed by the extent to which Jane has managed to sort of automate certain parts of her job. But I am a little worried because it does sound like she's kind of automating herself out of a job gradually. It seems like if she can give every part of her process over to a bot, eventually the people who employ her might just say, well, why are we paying Jane? Why don't we just have the bot do the whole thing? I mean, this,
kind of is the whole question, right? Because I think, you know, if the average manager hears, I'm using AI and it sort of helps me with the first 20 or 30% of every assignment, people say, oh yeah, that's going to make you a lot more productive. That's good. You know, we can sort of focus your attention on more creative matters. Once AI is doing 60 to 80% of the job, I do think that's where the manager is like, wait, what exactly are you doing over here?
So, you know, I think there's just going to be a maybe longer-than-we-expect period of arbitrage, essentially, where a bunch of really savvy workers who know how to get the most out of AI are going to basically be living on easy street until their bosses catch up. But it's also possible that bosses are going to catch up faster than I'm thinking. You know, I will say, as a creative worker myself, Kevin, I do bristle a bit at the idea of leaning this hard on these bots, right? If you're making video,
My hope would be that you want to put a real personal stamp on that. You want to put some human ingenuity in that. You just don't want to sort of rely on the regurgitated writings of every human to come before you. But I don't know. How did you react to this one? So, yeah, I agree with that. I think there's sort of a commodity part of the media industry where you are basically just
like producing, you know, scripts or little short videos out of articles where you are essentially taking things from one format and putting them into another format. I think that is very low-hanging fruit for AI. All right, next up, an email from a listener named Rick Robinson. Should I read this one? Yeah. All right.
Rick wrote in to tell us that he started using AI to help him navigate difficult situations with colleagues, and here's what he said. Quote, I work for one of the country's largest nonprofits in tech, okay, brag, with a recently expanded staff of folks across a range of ages. To get to know one another, we all took a DISC
assessment. You ever done your DISC profile, Kevin? No. DISC, capital D-I-S-C. Rick writes, quote, it's like a Myers-Briggs or other personality assessment. Anyway, after getting together to understand each other's DISC reading, we promptly forgot about it, which I think is
sort of the median outcome for taking any personality test. Okay, back to the quote. I thought that was a shame, so I built a GPT bot that was trained on a portion of everyone's results, leaving us with an AI that could answer questions on how to deal with difficult situations that arose with one another. Example, I need to tell Sally and Ned some bad news, but it actually impacts Ned a little worse, I think. What's my approach so as to minimize his anxiety and make Sally understand how this could actually be a good thing?
Boom, a series of suggested approaches based on their profiles and on how they might react to one another. It's a skeleton key for lazy bosses. What did you make of this? Wow. Well, number one, I want to learn more about the DISC reading because I'll take any personality test. Yeah, you really like that kind of stuff. Well, I'm trying to get a high score. This is like astrology for people who went to business school. It absolutely is. And, you know, if listeners want to send us their DISC readings, we'd love to take a look. But look, whatever the personality test was,
I do think it is interesting, this idea of I'm going to use AI in a kind of benign way to help me understand my coworkers, to kind of store that somewhere. And if I'm in a difficult situation, lean on the AI to help me a little bit, give me some tools for how I might work through this. Now, I am going to say here, Kevin, I actually don't think that AI is the most important part of the story.
What do you think the most important part is? The most important part is that this person, when confronted with a difficult situation, took a beat and stopped and thought about how to respond before he acted, right? You think about most of the workplace conflicts that arise: people stew and they stew and then they see each other in the office and it all just kind of boils over, right? And people act from emotion. And the trick of it is,
is to try to separate yourself from the conflict enough to say, okay, what kind of human am I dealing with? What do I know triggers them? Is there a way that I can approach this where I might get a little bit better result? I think that's probably like 85% of the reason why this succeeds. Now, at the same time, if you have...
15 different coworkers and they all have very different personalities, and you have trouble keeping track of this person's drama and where this person is emotionally right now, then keeping some sort of AI system... I don't know. Maybe it could assist. But what did you think? Yeah, I like this use case of sort of
what you could call conflict simulation. And I use AI for this sometimes too, not in the same way that our listener Rick did, but basically, if I have to have a hard conversation with someone... like, I was trying to help a friend negotiate for a raise a few weeks ago. And I was trying to... I didn't get it, by the way, but go on. You're
your own boss. You can just give yourself a raise anytime you want. But I was basically being asked for my advice, and I was a little unsure about how this specific situation would unfold. And so I did go to a chatbot and say, what advice should I give my friend? And it gave me some pretty good advice that I then relayed to my friend. So this kind of thing, I think AI can be very useful for. I also think that
there's an intriguing potential here and I want to propose a new product. Okay. Which is Slack simulation mode. Okay.
Because how many conflicts at workplaces around the world are basically sparked when someone posts something sort of off topic or controversial in Slack and it totally derails an entire team? I think there should be a simulation mode where before you derail a Slack conversation, you enter the simulation mode, you type whatever you're going to type,
and all of your synthetic coworkers respond in real time. And so you can get a sense of how mad you are going to make people by posting the thing. And then you toggle off synthetic mode and you go back into real Slack and you post your thing. - This is genius. I love this idea. I also just think it'd be fun for people who work on really small teams like I do to just, you know, have six or seven synthetic coworkers who are dropping funny memes into the chat. You know, I'd really enjoy that. - Alan, VP of sales. What do you think about this idea?
All right. I do want to do our DISC readings after the show. All right. So next up, we got a bunch of responses from educators, teachers, even school administrators and principals who figured out ways to use AI in their jobs. And in particular, for one of the most painful and time-consuming tasks involved with being a teacher, which is writing evaluations. So here's an experience we heard about from one high school teacher.
Hi, Hard Fork team. This is James Deck, Director of Innovation at Franklin School in Jersey City, New Jersey. And I wanted to share how generative AI has transformed my work as a high school design and technology teacher.
Writing report card narratives used to take me 20 to 30 hours each semester. But now with the help of the OpenAI API, I've created a tool that extracts all the assessment data from our learning management system, anonymizes it, and generates a draft narrative for each student. Then I review and edit these drafts, reducing my total time spent to about two to five hours. I've shared this tool with my colleagues, and there's about 10 of us that have been using it. It's been a game changer for all of us.
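James's tool, as he describes it, does three things: extract assessment data from the learning management system, anonymize it, and generate a draft narrative. Below is a hedged sketch of the middle two steps; the record fields, placeholder text, and prompt wording are invented for illustration, and the final model call (he uses the OpenAI API) is left as a comment.

```python
# Sketch of the anonymize-then-draft steps James describes. Field names and
# prompt wording are illustrative, not taken from his actual tool.

def anonymize(record):
    """Replace the student's name with a placeholder before any API call."""
    redacted = dict(record, name="STUDENT")
    redacted["comments"] = [
        c.replace(record["name"], "STUDENT") for c in record.get("comments", [])
    ]
    return redacted

def draft_prompt(record):
    """Build a model prompt from an anonymized record."""
    lines = [f"- {key}: {value}" for key, value in record.items() if key != "name"]
    return (
        "Write a short, specific report-card narrative for STUDENT, "
        "based only on this assessment data:\n" + "\n".join(lines)
    )

# The prompt would then go to the API, and the teacher edits the returned
# draft, e.g.:
# client.chat.completions.create(model="gpt-4o",
#     messages=[{"role": "user", "content": draft_prompt(anonymized_record)}])
```

The anonymization step is worth highlighting: stripping names before data leaves the school's systems is what makes this workflow defensible with student records.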
This is so cool. I love this one. Do you like this one too? So I do with one caveat. So obviously there's a lot of work involved in education that is not directly related to teaching students in a classroom. I would put evaluations, college recommendations, things of that nature into this category. The work never stops for these teachers. Yes, they have so much busy work and paperwork involved with their jobs. And if AI can help cut down on that, I am all for it.
The caveat here is that I hope that James and other educators who are using AI to speed up the process of evaluating student work, of giving feedback to students on their report cards, I hope that they are still thinking as hard about those evaluations as they would have before. I hope that they are not just kind of taking the stock output of the AI language model and giving it to their students and saying, here is the assessment of how you did this semester. Because
those assessments can be very helpful for students in figuring out like what to work on, what they need to improve, giving them positive reinforcement or feedback when they're doing a good job. I think that loses a lot of its appeal and usefulness to the students if it's all or mostly being produced by AI. What do you think? I mean, that,
makes sense. My hope is that these narratives follow a pretty standardized structure, right? Like, ideally, you're really trying to coach these kids in a few areas, and the narrative reflects essentially how well they're doing in those cases. If this were the case where it's like, you know, write a lyrical 20-page assessment of these individual humans, like, on whatever rubric you want, then yeah, you're right. Something would be lost. But
But let's just say, if this teacher was spending up to 30 hours a semester on these, that is almost a full week of work that he was doing in addition to everything else he had going on. That seems like a really tough requirement to put on these teachers. So if he's able to get that down to two or five hours and lean on the AI to do work that it seems like can actually be automated decently well, that seems good to me. Yeah, if it's just sort of clerical paperwork
happening here through the AI, I think that is totally fine as long as there is actual substantive person-to-person feedback and assessment and guidance taking place in the classroom with these students. Yeah. Now, I will say a sort of similar story that did make me more concerned was, did you see this thing about Texas using computers to grade written answers on their standardized tests this year? No. What happened? Well, so they changed some of their standardized tests to
involve more written answers, so like less multiple choice, more kind of open-ended. And they are using some sort of AI system to do the initial scoring of these answers. They apparently will still send a quarter of responses to humans to be rescored as a kind of quality control, but the AI is still going to be taking a first pass. And this is one where I say, I actually am nervous because it just feels like in two or three years, we will find out that this system was biased against students who are minorities, basically.
Interesting. So your worry is that the AI just isn't up to the task of evaluating these student responses. Yeah, yeah, that it's going to have certain biases that are programmed into it and that that gets reflected in the grades and there aren't that many humans to sort of review the answers and it winds up having sort of inequitable outcomes.
Yeah, I mean, I just kind of wonder what the status quo was there, because it's not as if standardized tests with sort of essay responses are being sort of, you know, carefully and thoughtfully reviewed by a team of expert educators. They're basically comparing them against model answers and figuring out, like, does it address this point and this point and this point? Does it have a thesis and supporting evidence and a conclusion? I think AI can take a first pass at that. But yeah, I would love to see humans kept in the loop on that kind of thing, especially if these are tests that are
that might determine where students are getting into college or whether they're tracked into gifted programs or not, like things that have a real measurable impact on their lives. All right. Well, so far, we've heard some fun and cool AI use cases, but we'd be remiss if we didn't include a few AI horror stories, Kevin. Yeah, let's see the blooper reel. All right.
Our first nightmare AI story comes from a listener named Colin Barry, who has a regular work meeting where his writing is basically put to the test against ChatGPT, as in, basically, is this better than what ChatGPT would have come up with? Here's what Colin said to us over email.
Quote,
Every week without fail, they copy and paste my assignment into ChatGPT in the meeting with me. They aren't any good at prompts, and I like to think my work is better, but with every new update, it inches closer. Here's what I want to say about this. This is a human rights violation. Whatever is happening in this workplace is specifically prohibited under the Geneva Convention, and this man needs to get a lawyer. I would scream in...
If I were sitting in a meeting and people were looking at my copy and they're like, all right, Colin, now let's just see what the AI
can do, and then it's competing against me? Shut it all down. Colin, you need a new job. Yeah, yeah, you need to fire this client. This is basically the digital white-collar equivalent of the John Henry story, where you're racing against the machine. And it is truly dystopian that you would be constantly compared to an AI chatbot that gets better with every new iteration. So it's like, what
is Colin supposed to take from, you know, seeing the AI get better? It's like, you know, I'm sure he is being as creative as he can be, you know, given the terms of the assignment. I think just seeing your work next to an AI isn't going to suddenly, you know, inspire you to become Shakespeare. Yeah, I think the proper response to this sort of behavior is to write back to whoever is pasting in these ChatGPT answers and
and say, hey, it looks like you're finding a lot of utility out of ChatGPT. Why don't you do that? I'm going to go work with someone who actually wants a human being to do this. For real. Get out of here. Get out. Knock it off. Yeah.
Well, Kevin, if we might do one more, this feeling that you're going to hear in this call reflected a feeling that came up over and over again, which is that so many workers these days are turning to these AI tools because the demands of their jobs just keep increasing and increasing, and they are not getting any more help, right? So a classic story of capitalism, and I want to play this one from a listener named Emma Fairchild Barge.
Let's listen. Hey, Kevin and Casey. I'm a senior manager at a technology company living in New York City. What I see in my daily life and across my peer set too is a massive feeling of workplace overload and being asked to do too much with too little.
The context across many industries here is that most professionals have experienced significant layoffs without backfill. And then we add to that that the typical corporate calendar is filled with six to seven hours of meetings daily, the result of which is often a long to-do list of tasks that come out of the meeting. The sheer volume of work expected of employees is astonishing and completely unrealistic.
What I find is that AI can be a lifeline to kickstart projects when workers are mentally drained and facing incredibly tight deadlines. I think what we'll see in the future is a better use of AI to right-size the workload of professionals struggling to stay afloat. What do you think about that?
I mean, it makes me really sad. You know, I mean, I have to say, I feel like I've been lucky in my life that I have not had managers who truly gave me more than I can handle for any extended period of time. But of course- Yeah, you work like two days a week. It's like, you're basically on European schedule here. I mean, I work a lot, but it's like all on my own terms.
Right.
My fear is that as their bosses catch on, AI actually starts to become an excuse to give them more work. Yes. Right? It's like you actually should be more productive because now I know that you're doing the first 40% of every assignment in something like ChatGPT. What do you think? Yeah, I think the take that our listener has here is the optimistic take, which is that this technology is going to help stressed out, burned out workers sort of survive
speed through their excessive workloads and basically come to a more balanced place at work. I think the pessimistic take is that what you're saying is true, that
As soon as this technology becomes truly useful for improving efficiency, managers, bosses, people who run companies are going to say, well, if you had 10 meetings before, now we're going to give you 20 meetings because you're going to use AI to get a bunch of stuff done faster. And they're going to sort of raise their expectations. And actually, I was thinking about this because I was reading a study recently that was done by Accenture, the consulting firm, where they basically surveyed workers and bosses about their feelings about
generative AI in the workplace. And one of the surprising things that stuck out to me was that there was a huge disparity in how workers and bosses responded to prompts like, I am concerned that AI may increase my stress and burnout. So 60% of workers in this survey said that that was their concern, that AI was going to make them more stressed and burned out. Only 37% of bosses felt that way.
So you have a situation now, I think, where a lot of managers and people who run companies, people who get excited about AI as a productivity enhancer are thinking this is going to make everyone more productive. They'll be less stressed out. They'll be less overloaded. And workers, meanwhile, a lot of them are saying to themselves, wait a minute, that just means that my boss is going to expect more of me. And that's actually going to increase my stress rather than decreasing it.
Well, I'll be very curious to see how this one plays out. And I hope that we continue talking to our listeners about those moments at the workplace. Like, you know, is there going to be a moment when you first realize that your boss does actually expect more now that AI chatbots exist? We would love to hear those stories. Yeah. And it speaks to one of these central themes that kind of emerged out of not just sort of the literature so far of what we know about generative AI at work, but just our listener stories, which is that
There are sort of two trust problems with AI at work right now. The first is trust in the technology, right? There are a lot of reasons that people still don't trust this technology. It gets things wrong. It makes things up. It causes embarrassing errors in sort of client-facing work.
But I think there's also a trust problem between people that is sort of being illuminated and exposed by this technology, which is that workers don't trust that if they use this stuff, their bosses will consider them valuable and, you know, allow them to be more productive without just piling more work on.
Bosses don't trust workers to not cut corners and sort of, you know, automate themselves into a very easy job. And so there's just kind of this mutual unease and distrust that I think is happening at a lot of workplaces right now spurred by this technology. That's really interesting. What do you think like workplaces should do about that?
So one thing I think could be helpful just as a first step is just to bring this discussion out in the open, right? I think at a lot of companies right now, these discussions are happening sort of in private sort of side channels around the water cooler. Disappearing signal chats. Yes, but I think there's a real value in just sort of putting this all on the table saying, hey, this technology exists.
what is it good for, what is it not good for, how can we use it, and how should our jobs change as a result? And I think involving not just senior managers, but everyone at a company in that discussion is a really important piece. I think right now a lot of companies are trying to kind of implement these top-down rules for like how we are going to use AI in this workplace.
And I think it should be a more organic bottom-up process where everyone from individual contributors all the way up to the CEO feels like they have a voice in that conversation. Very interesting. So those were incredible and illuminating stories. Thank you to all of the forkers who sent in examples of AI at work. We love our listeners and their very thoughtful submissions. Yeah, those were really great. When we come back,
Let's bring a little green energy to the podcast. We'll talk to legendary YouTuber Hank Green about what it's like to be a creator in 2024.
I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.
Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.
It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.
Casey, I'm very excited for our guest today. Oh, me too. This is someone we've been wanting to get on the show for a long time. Really, I would consider him a friend of the show, even though he's never been on before. But he did email in a question one time. It's true. So today we're talking with Hank Green. Hank is, I would say, a legend of online content creation. I don't know of really any other way to describe him. He has been making videos on the internet since 2007 with his brother, John, and they have created this kind of whole
online empire. They've got educational video channels, including SciShow and Crash Course. He also is one of the creators of VidCon, the big annual convention for online content creators. And I would say he's been sort of a driving force behind the rise of the creator economy, this world of people trying to make a living by putting stuff on the internet.
That's right. And I would say in recent years, he has become a sort of elder statesman of the creators, a very smart, reasonable voice on issues affecting creators. And so whenever I want to know how changes in platforms and algorithms are affecting the people who make their living on them, Hank is always the first person I want to hear from. Totally. So he's someone we've wanted to talk to on the show for a while. And there's so much happening in the world of online content.
and media creation right now. We've got the looming ban of TikTok, which may happen at some point in the next year. We've got the rise of TikTok competitors, YouTube Shorts, Instagram Reels. We've got sort of the larger platform changes and kind of the splintering of social media brought about by kind of the decline of Twitter, now X. And,
And we've also got a YouTube channel that we need to grow, Kevin. It's true. So now seems like the perfect time to bring him in to talk about the many changes facing online creators in 2024 and what he thinks the future might hold. All right, let's bring in Hank. Hank Green, welcome to Hard Fork. Hey, thanks. Hi, Hank.
Hank, you have a big presence on many platforms, but one of them is TikTok, where you have 8 million followers. And as a very popular TikTok creator, I have to know, how are you feeling about the prospect of it potentially going away?
Well, in addition to being a popular TikTok creator, I am also a lot of other things, including like a human on Earth at a weird moment in history. So I am maybe unsurprisingly feeling very conflicted. I feel very conflicted. I feel like I'm glad I'm not in charge of this choice is one of my big sensations. I wish you were in charge of this choice. I think it would be a better choice if you were in charge of it.
I would definitely do it differently. Yeah. You've been very strategic about building a bunch of different platforms for yourself, such that if any one of them disappears or changes in some way that makes it harder for you to make money from there, you've got other options. But a lot of TikTokers are just TikTokers. And so I'm wondering, have you had other TikTokers come to you and say, how do I diversify? How do I build these lifeboats for myself? And what do you tell them?
Oh, for sure. I mean, I tell them you have to diversify and TikTok is terrible at letting you diversify. What do you mean? The algorithm is so sensitive to any sign that people might like one piece of content a little bit less than another, that if you make a piece of content that has a call to action in it, that's like, come sign up for my newsletter or come follow my new podcast.
The algorithm immediately notices that people are a little bit less engaged with that piece of content than a normal piece of content that you make. On YouTube, you might go from getting 100,000 views to 60,000 views on a piece of content like that. On TikTok, base levels are higher, so you'll go from getting 300,000 views to like 3,000 views. Wow.
And it notices that immediately. Like this is like what these swipeable platforms are so good at is noticing the difference between something that people love a lot and that people love a little bit. And I think this explains this phenomenon at TikTok where I see somebody do some like little piece of shtick and it's really funny and I like it. And then every time I see them ever after, it's just the same shtick. And it feels like it does get kind of trapped in a box, which I don't really like.
No, yeah. I mean, like one of the things that I say to creators is like, you have to understand that you make two kinds of content. You make one kind of content to reach new audience and you make one kind of content to connect with existing audience. And every time you make that second kind of content, it's going to feel like you're failing because it's not going to have the same reach. But it's succeeding because it's doing something that is way more valuable to you long term, which is actually building an audience and a relationship with that audience rather than just
rather than just making something viral that will come across your feed and you'll forget about it. - Yeah. - Right. So I wanna ask about business, Hank, because in 2022, you made a great video about the TikTok Creator Fund, which was the main way that it was paying creators at the time. And the gist was that as the platform grew, the pot of money it was paying out to creators stayed the same.
Meaning that people like you had to work harder and harder for less money. That's different from something like YouTube, where you get to keep 55% of the ad revenue on your channel. So my question is, has the money you made on TikTok changed much since then? And how do you rate TikTok overall as a place to grow your business? You know, I'm so sorry to my contacts at TikTok for what I'm about to say, but...
But like, if you're listening, it'd be nice if I knew how much money I made because I have no idea. Really? It hasn't updated since January. It's broken. It thinks I'm British. It's paying me in pounds. Like the thing to remember about TikTok is that like, it's not...
super together. Like, sometimes people are like, I think that TikTok is trying to do X, Y, and Z, and I'm like, oh my gosh, if they are, nobody knows about it. Like, it's very messy. So you literally do not know how much money you are making on TikTok? I have not. My dashboard hasn't updated since January. Wow. This sounds like it could be a classic Chinese Communist Party psyop, Kevin. Yeah.
I mean, one of my takeaways from what you're saying, Hank, is like sort of however you feel about like the geopolitics of a ban, they sure don't seem to be treating their creators all that well. And so as a result, what should be their biggest and loudest constituency has actually been a little oddly quiet, right? If you don't even know how much money you're making from the app, you're probably not going to march in the streets demanding that they let it continue to exist. Yeah.
Yeah. If you went on the app the day that they posted that message, like, you know, you have to call your congressperson, there were lots of creators in the comments being like,
So now you want us to do stuff for you and you're not doing anything for us? You know, like giving us no signal about what's happened to our accounts. And oftentimes it's things like people feel like they've been shadow banned, and actually their audience just got less interested in them, because TikTok audiences move on extremely quickly from everything. But I will say, plenty of people are very upset, and
I don't think that we understand how weird this is. What do you mean? Like, we don't understand properly that we live on social media platforms. Like, what is happening if not living while you're on TikTok? You're living. You are alive. You're like in those moments. Your attention is focused on this thing. And this is a part of your life. It's a part of your social life.
And the idea of it going away is a little mind-boggling. It's like imagining your town stopping existing, but then the idea of the government being like, okay, your town doesn't get to exist anymore. Like, the people who are saying that this isn't a First Amendment problem are just wrong. It is about speech. But also, there's a problem.
Yeah, but I don't think it's a specific TikTok problem. I think that TikTok might be a little bit worse at it right now than YouTube and Facebook because YouTube and Facebook have been through it. But like we can see
very clearly that being on TikTok changes how you see the world. We all know this. This is one of the things that people say is a good thing about it. They're like, this has really informed how I see the world. It's given me a view on things that I never would have had otherwise. And that's totally true. And the problem is that on TikTok, both the views are determined by the algorithm, and the kind of content that gets created is determined by the algorithm. The content that gets rewarded is the only content that gets created. Content that does not get views doesn't just not get views. It does not exist. Yeah. So, like, as a creator, I just know this, like, functionally to be true. Yeah. I'm curious, like...
if you believe as I do that sort of the medium is the message, the old sort of Marshall McLuhan formulation,
I'm curious what you think the message of TikTok has been. To me, it seems like every time there's a successful new platform, it kind of teaches us something about humanity and our psychology. I feel like the lesson of YouTube was like, your video doesn't have to be slick and professionally produced for people to want to watch it. In fact, maybe it's better if it's not. And maybe the lesson of Twitter, sort of original Twitter, was like, a lot of thoughts can be communicated effectively in 140 characters or less.
If TikTok does go away, what do you think it should teach us? What was the lesson of TikTok? I mean, the first thought I had is that the lesson that TikTok has taught us is that the only currency is attention. But I also think that Twitter taught us that.
You know, like, it's not like anybody made a bunch of money off their tweets. And yet we spend a huge amount of time giving free labor to the owners of these platforms because there's something there, you know? And then I think also that, like, the lesson of TikTok is that, like, culture can happen anywhere.
Very fast. Like the speed of culture is in a lot of ways the speed of connections between humans and that like that, the rapidity of the cultural creation on TikTok is unmatched.
It's wild. Remember when we were kids and the Macarena was a thing for six months? Every three years we might get a new dance. And now TikTok is like, we're going to give you 18 a day.
Totally. It does feel like it has increased the overall sort of cultural metabolism of the world. Oh my God, it's so fast. It's sped it up. So exhausting. But I think you hit on something else important, which is that it created the conditions for culture to almost spontaneously self-organize. My favorite moments on TikTok are when folks came together to do that Ratatouille musical, and it was just people sort of guessing, what song might you write if you were writing a Ratatouille musical?
Somehow Disney actually let them stage a version of it during the pandemic as a sort of streaming-only thing, which I watched, and it was amazing. But we should say this was like a software thing with features like Stitch and Duets. They sort of invited people, if you see something on the platform, come in and remix it. And to me, that was sort of the lesson. And then it was also –
It's so... like, the thing that the algorithm is good at is identifying when something is a part of that moment, which I don't even know how they do. Because, like, how do you tell a Ratatouille musical from somebody just singing Hamilton? Like, it's a computer program. It doesn't know, but it was able to do it. If you were starting today, which platform would you start your empire on?
I'd probably still start on TikTok. You would. It's just that, like, TikTok is bad at everything except discovery. You know, it's so good.
It's so good. Because there are so many chances. Like, when you're scrolling TikTok, occasionally you'll get one of those videos that has, like, three views. Yeah. Because it's giving it a chance. Yeah. And YouTube doesn't have enough human time, when you're spending 10 minutes watching a video, to give a lot of content a chance. So content has to find another way to get a chance, whether that's getting posted on Reddit or getting noticed by someone. You know, I try to watch a lot of YouTube videos from smaller creators, but I'm not watching YouTube videos from people who have one view, you know? Yeah. Hank, I know you are a student of media, and I know this because we've talked about our industry, and I've also heard you talking about journalism and where all this is headed.
I encountered a term this week for the very first time, which was ginfluencer, which sort of made me recoil. But it was in the context of someone asking me about this apparently new thing where journalists are being told that they have to be ginfluencers. So just drinking a lot of...
Yes. Yes. It's when you have too much Beefeater, then you become a ginfluencer. No, but they're basically saying, look, journalists are not this sort of sanctified class anymore. You kind of have to get out here in the creator economy and make your stuff entertaining and put yourself on TikTok. And that is part of your job description now. Do you think that is true?
If it is not obvious to the two of you that you're influencers, I will stand up and leave the room. What do you mean? You're like, you guys, you like, this is the thing. Influencer is a horrible word. We can all agree on that. Who created the word influencer? They were very influential, whoever they were. Yes, they themselves. They must have been influencers. Influencer is the word that marketing people use for creators.
Because influencing is what creators do for marketers. But I will say, obviously, you guys are content creators on the internet. You're internet content. Just picture Twitter two years ago. It's all journalists talking to each other and influencing each other. That's what it was.
I will say, to me, there's like two ways of not caring if you're a journalist. Like there's the not caring that's like, I just wrote an incredible investigation that's going to change the world. I'm going to drop it and I'll let like my institution promote it and I'm already on to the next thing. And that can be a big flex and congratulations. And then there's the kind of not caring that's like,
I just don't actually care about my job that much. And so I'm going to do the work and I'm going to put it online. And if somebody else wants to read it, that's fine. But, you know, I'm just not going to be too invested. And, you know, again, I think that's fine. But I think there's like a low ceiling on that kind of career in this day and age. Yeah. Yeah.
I'm thinking in particular about this exchange you had with Nilay Patel on his podcast recently about sort of whether the future of media will belong to sort of individual creators or whether it's going to be big institutions that sort of profit from the kind of loss of trust or sort of the changing media landscape. And you were sort of on the side of like, it's all going to be individual creators. People trust individuals now. They don't trust institutions. And Nilay was like, no, no, no, that's all like,
a fake story that you've been sold by platforms who want you to believe that legacy media institutions don't matter anymore, so that they'll buy more ads on YouTube. And it just occurs to me that Casey and I have basically made opposite bets on that argument: he is now a solo entrepreneur with his own media brand, and I get a paycheck from a big legacy media company. So I guess my question to you, Hank, is: which of us is smarter? Yeah.
So that's actually a separate question. I do have an answer to that one if you'd like me to answer it. Please. I think Nilay is wrong. I thought about it a lot. It might very well be that it is part of the structure of the social internet, but I don't think it's a lie that's being sold to me by
platforms, I think it's a choice that people are making when they are choosing what to watch. And I think it's bad. So I think that Casey's right and terrible. I think you're correct, but also a bad person. I'm always saying this. No, I mean, I basically agree with you. And again, look, I think that you can have an absolutely great career working for a big institution, as you are, Kevin. I also thought that, like, I really wanted to try making a go of it on my own. And it's been super cool. And I wish we had more big, great journalistic institutions in this country. I wish we had more big institutions in general. I wish there were more trust in institutions. One of the reasons I write is so that I can write about institutions when occasionally they do the right
thing, right? In the hopes that maybe it might build some trust in them. So I think that even though we all sit in different parts of this landscape, we maybe agree with each other more than you might think.
Oh, for sure. I also think that as we move into a world where more content is being created by non-humans, the systems that build trust are going to be very, very important. But that is done both through institutions and through individuals. Like, there's a bunch of, I think,
bad and incorrect mechanisms that weigh down on institutions more than they weigh down on individuals. But also, you can tear down an individual permanently more easily than you can tear down an institution permanently. Good point. You teed up my next question very nicely, which was about AI. I'm very curious how you feel about the advance of AI as it pertains to what you do for a living. Are you using AI in your daily life? Do you feel like five years from now, most or all of what you do will be possible to do with AI? Yeah.
I just want listeners at home to know that they told me I was going to come talk about TikTok. And then they hooked me in. And now we're talking about AI. It was a bait and switch. I did not prepare for this. Hank, you brought it up! I...
I'm not entirely sure, and I'd be curious to hear y'all's thoughts about this, that it is okay that we accepted this future so easily. That content that was trained on the creations of all of us... is it kind of okay to let that have happened, and then let it replace us, to some extent?
So, like, part of me is like, is it even okay to use it at all? I feel weirder about image generation than I do about text generation, but I don't know why. But also, I do use it. Like, just like how I have always felt very conflicted about being on TikTok, but I'm on it anyway. What do you use it for?
I use it for starting out when I'm confused about something. If I come across something in a paper that I don't understand, instead of looking up each term individually, you just copy the paragraph and you're like, "What the heck does this mean?" And it's like, "Oh, so that word means that and this acronym means this." It's just a helpful quick tool to get into a complicated topic faster.
And then here's my weirdest way, and this is going to sound ridiculous: I use it to fact check, and I can't say that without putting the disclaimer in. It will catch things that are incorrect. It will catch things that are simplifications. It will catch things that are correct that it thinks are incorrect. And it will miss things that are incorrect. So it will do all the different things, but it will highlight moments in my scripts where I'm like, you're right, I shouldn't simplify that much. And then I will send it to an actual human fact checker,
because I do science videos. So if it's a SciShow script, that has to go through a human fact check. But it's less work for that fact checker, because I pre-identified some of the things that they were going to come back to me with. Now, what kind of prompt are you giving it? Are you saying, like, fact check this, or how are you doing it? Fact check, colon, paste the script. Got it. Wow. This has been a pretty gloomy conversation about the present and maybe near future of the internet. But one thing I've always...
liked about you and your work, Hank, is that you do sort of find these pockets of delight and magic in the internet. I find fewer and fewer of those moments these days, but I'm curious, Hank, are there parts of the internet that still feel like that to you? Just people like making really good stuff, putting it out there and getting discovered? Yeah, of course. I mean, it's like, it's everywhere constantly. And I think that if you... I don't know.
I don't know. Like there are weird human nature problems where we get so much more energized by finding out that we are better than other people and that other people are terrible than anything else. And that is just like, it really tears at the fabric. But I think that what...
will ultimately be the solution to this problem will just be us getting better, just in the same way that old clickbait headlines don't work anymore. I think that people will get more aware of when someone is just trying to find the wound you have in the body of America and wiggle their fingers around in it. Like, it'll just be more obvious that it's happening. It is so obvious to me now when it's happening.
But I understand that I have a different relationship with media and I have been in it for a really long time. But my hope is that we solve these problems not through infringing upon the speech of Americans and saying, you have this amazing platform that you have used to build a community and connect and we're just going to evaporate that or just let it not be in the app store and peter out over time.
But we solve that problem through like being better at being people, which is like how we have solved long term all of the other problems, which I think we're capable of. I think we need to work toward a collective point of view on how algorithms should work. Right now, the only sort of commonly held view that I hear about algorithms is I should be getting more views than I am. That's the sort of one thing that everyone can agree on.
But when it comes to like, what should be the content of these feeds? How should it sort stuff? How manipulative is it allowed to be? How long should people look at it? How addictive should it be engineered to be? This is just stuff that we have no collective understanding on. And so what we're left with is this essentially superstitious feeling that whatever's going on in these algorithms sure doesn't feel good. And so maybe let's get rid of the Chinese one.
And I think we will just look back and feel like that was not a very sophisticated way of addressing the underlying issue. Yeah. I mean, sometimes it feels like we're monkeys with guns, and we're like, this thing's loud. And then you're like, why does my foot hurt? You don't even know why you got hurt by your new toy, because it's so outside of your previous experience. Right. Like, what you're talking about is essentially, like, evolution.
Like we are the finches. Our beaks are not long enough to like get down in the seed pods and we just need to like spend a couple generations like developing longer beaks. Yeah. And I think that's totally possible, but not if it keeps changing so fast. Right. Hank Green, thanks so much for coming. Thank you, Hank. Yeah. Thank you.
When we come back, a wild story about a deepfake scandal that rocked a Maryland high school.
All right, Casey, we have one more story we have to talk about today, because it has been on my mind all week. And that is the story of what I would call Main Street deepfakes. Deepfakes.
Deepfakes, these AI-generated synthetic fakes, are a problem. We've known this. We've talked about it on the show before, usually in the context of kind of big national issues: elections, protests, things of that nature. Somebody created some synthetic audio of President Biden. Yes, exactly. Exactly.
So that is a problem, and I think lots of people are rightfully concerned about it. But we're also seeing a bunch of stories kind of trickle up from local media outlets about things that are happening in much smaller sort of settings where AI deepfakes are creating major problems. Yeah, this technology is very easy to use. It is almost freely available to use on the internet, and it is finding its way into more and more hands. And this is a story about that.
So we've seen over the past few weeks and months a number of stories like this. There was a story about deepfake nude images being created by and of high school kids that are going around in high schools. There was a similar story out of Spain that caused a big outcry. And then there was the story that happened last week that my colleague Natasha Singer at The Times covered, really wild story about something that happened at a high school in Baltimore County, Maryland.
All right, so walk us through it. So the story takes place at a high school called Pikesville High School in Baltimore County, Maryland. And in mid-January of this year, there were some mysterious audio clips that started going around this high school. These clips are purportedly of the school's principal, Eric Eiswert, who is making a kind of bunch of inflammatory racist and anti-Semitic comments. He talks about...
ungrateful Black kids in the school who can't, quote, test their way out of a paper bag. At one point, the voice says that if he gets one more complaint from a Jewish person in the community, he's going to, quote, join the other side. So very inflammatory statements. And the audio clips kind of sound like he was secretly recorded making these statements.
A lot of people in the community are outraged when these clips come out. They think, you know, this stuff is real. They're sort of harassing and threatening the principal and his family. He was actually temporarily removed as the school's principal and just triggered a huge wave of backlash, as you would expect. Absolutely. If a school principal had been caught on tape saying these kinds of offensive things.
But then people start investigating. The principal says, this wasn't me. I never said these things. The police start getting involved. They send the audio recordings to experts and get access to some emails.
And they find that actually this has a much stranger origin story. Truly. They find that the athletic director at Pikesville High School, a man named Dazhon Darien, was having a dispute with Principal Eiswert over some performance issues.
The principal had opened an investigation into Darien over the alleged mishandling of school funds. Apparently he was making improper payments to an assistant girls soccer coach who was also his roommate. And who apparently was hired but never did the job. Yes. Which is a dream job.
Right. So basically there was a conflict between the athletic director and the principal, and they conclude that the athletic director, Dazhon Darien, made this deepfake audio recording as retaliation for that investigation. Yeah, I mean, I have to say, this feels like a real gift to the police department. They go and start snooping around, investigating, like, is this principal having a huge dispute with anyone in the school? And they hear, oh yeah, well, there's the athletic director, who the principal had recently caught funneling school funds to his roommate. So I'm sure that got the police's attention in a hurry. Yes, so the police, after this investigation, they conclude that it was Darien who had produced this recording and sent it to two other teachers. One of those teachers then sent it to a student,
And it sort of made its way around not only the school, but also local media outlets. The NAACP was also sent copies of this recording.
So after investigating this whole thing, the Baltimore County Police put out a warrant for Darien's arrest. He was arrested the next day at BWI Airport, after he was stopped for having a gun and security at the airport saw that he had an open warrant. He was charged with disrupting school activities and some related charges, like stalking the principal. And the school superintendent said that officials are recommending that he be fired. I like that this was the crime they got him on after all: disrupting school activities. That seems like such small potatoes given the gravity of this. You know, I will also say, Kevin, you left out one of my favorite details of this story, which is that
once the audio started getting around, one of these teachers sent it to a student who, quote, she knew would rapidly spread the message around various social media outlets and throughout the school. So whoever this anonymous gossip was at Pikesville High School, you may have a future in journalism. Although, of course, we will want to sort of help you vet your evidence. Right. So very serious story, very sort of disruptive use of AI deepfakes to sort of
wreck the life of a principal who, it appears, did nothing wrong here. And I wanted to talk about this on the show today because I think we are at the precipice of a sea change in the way that AI is used in people's ordinary lives, and a sea change in how we perceive reality: the kind of suspicion that will now have to be attached to anything that appears shocking or surprising, to any kind of evidence, no matter how convincing it is.
Yeah, I think, you know, something we've talked about before on this show is that with the synthetic media that typically gets written about, it is of these major political figures. It's Trump. It's Biden. And there's a huge journalistic infrastructure that can go in, can investigate, can say, hey, this was real. This was fake and sort of help everyone adjust their settings into determining what is reality.
That's going to be so much harder to do at the local level, where journalism has been gutted. There are few sources of trusted authority in these communities who can weigh in and make these determinations. And so this Dazhon Darien appears to have been exploiting that and saying, I think I can get away with this, because who is actually going to come in and be able to tell that the principal didn't really say this?
And we should say like the recording itself is very realistic. Like it was actually quite convincing. So I think we should just listen to it, not because I'm like excited about spreading these, you know, fake recordings around, but I just think it's like useful for people to understand and hear for themselves how good this technology has gotten. Yeah. So let's hear a short clip of that audio. I'm the principal here, me and only me.
You know, I seriously don't understand why I have to constantly put up with these dumbasses here every day.
So, Casey, when you hear that, if you did not know anything about this story, would your suspicion be that this is fake? No, not at all. I mean, I think, you know, what gets me, Kevin, is that there is so much emotion in the voice that we just heard. And I think up until now, that has been the telltale sign for me is there's a certain flatness in the affect of the voice that lets me say, okay, I'm hearing a computer.
But that fake voice was angry. Also, the rhythm of the words was such that he spoke faster in certain moments and slower in others in a way that also sort of mimics human speech. So for those reasons, I did think that it sounded quite realistic. What did you think? Yeah, I thought so too. I mean, this technology has gotten quite good quite fast. Just a couple years ago, even the best synthetic AI voice software
on the market still sounded like a robot. I mean, it was flat, it was sort of affectless, it would sort of put the emphasis on the wrong syllables of words sometimes. I believe it's pronounced syllable. And it was quite labor-intensive to make a deepfake audio. You needed to feed it hours of someone's voice samples in order to be able to sort of clone their voice and make it say stuff that they didn't actually say. Now, some of these technologies allow you to do synthetic voice creation with as little as a minute or two of someone's actual voice being recorded. There are now technologies that can insert, you know, ums and ahs and the sort of stumbles that people make as part of our natural speech cadence. And in this case,
It appears that there was actually background noise that was sort of inserted into this audio file to make it sound more like something that someone had caught on a sort of hidden recorder in their pocket or something. Yeah, there was a kind of tinniness to the audio that also made it just kind of give you that impression of this was surreptitiously recorded.
So when the investigators sent this audio to AI experts, they were able to kind of flag some things that they thought made it sound more synthetic. They talked about there were signs of sort of after-the-fact human editing. But I would say that this, for me, was a good reminder of just how convincing these fakes are and how little money and technical expertise you need to make them. I guess it raises for me the question, Kevin, of like,
as you start to hear these little controversies in your community, oh, here's this snippet of audio. What are the signs that we're looking for? You know, how do we know when to react and when not to react? Yeah.
I think this is really tricky because a few years ago, I would have told you, you know, there are a few basic indicators that something is AI-generated, right? If the voice sounds kind of flat, if it has these sort of, I don't know, computery effects on it, you might hear something and think, oh, that's not real. But I think this technology has gotten better to the point that most people who are not experts would not be able to tell the difference. So I don't know if...
If you have tips for listeners who may be worried that something like this could happen in their own community, what would they be? Here's the message that I would give. And this is a tip that is taken from essentially how to avoid falling for clickbait and rage bait online. You know, I think sort of during the heyday of people sharing news on social media, a lot of times you would see something that would produce this really strong emotional reaction and you'd share it maybe without even reading the article, certainly without doing any work to tell whether it was true or not.
And in those cases, the advice that really worked for me was, number one, ask yourself two questions when you see a story like this. One is, is this story telling me something I want to hear? Is this about something I'm inclined to believe or want to believe? If so, that's the first thing that should trigger your skepticism, right? Before you share, you want to do a little bit more digging.
And then the number two thing to look for is, is this producing a very strong emotional reaction in me? Do I find myself really angry? Am I really upset? That should be the second sign of like, okay, maybe somebody is trying to get me to feel this way. If you can notice those two feelings in yourself before you react to a piece of media, you're going to be in a better position to understand whether somebody might be trying to mislead you. Yeah, I would totally agree with that. I also think like context is very important. You know,
If some recordings or video or audio emerge of someone in your community saying outrageous and offensive things, is this someone who has a history of saying outrageous and offensive things? Does it make sense for this person to be sort of recorded saying these things? Or is this kind of a departure from sort of how people have heard them speak publicly and privately before? Yeah.
But I would say looking to the actual media is sort of a sucker's game, because this technology, as we've talked about, is getting so much better and so much cheaper, so much harder to tell apart from real media. I think that ultimately we just have to kind of dial up our overall skepticism
about the sort of media that we encounter on the internet or in our communities. Which does have this really unfortunate effect, Kevin, which is that when people do say really horrible, explosive things and then want to deny them, in many cases, they are going to now get the benefit of the doubt. This is a phenomenon that has a great name, the liar's dividend, right? And the liar's dividend is now that in a world where deepfakes exist,
it is easier for liars to get away with lying. So that I think is really unfortunate. The other thing that I'm just taking away from this is, man, do we need local journalism in this country? You know, we need, in fact, a lot of what we know about this story came from a great nonprofit called the Baltimore Banner, which is a relatively new organization in Baltimore that's trying to pick up the banner from the Baltimore Sun, a once great newspaper that is, you know, in some pretty rough times right now.
And because the Banner existed, we were able to get some good boots on the ground, people who understand this community, people who could talk to people who knew and could give us some information that we can trust. Not every city has a Baltimore Banner. And so when we think about what is the solution to this deepfakes problem, it actually is in part a how-do-we-solve-local-journalism question. I agree with that. I also think there are some things that the tech companies could do to
prevent, or at least cut back on, the abuse of these tools for retaliation or retribution or revenge.
Some of these platforms now require you to sort of prove that, if you're trying to clone someone's voice, you actually have that person's permission. And I can imagine a number of ways that you could attempt to do that. But I also think there are limits to that, because we now also have kind of more open-source voice-synthesizing tools that are just not going to take those same precautions. Yeah.
Yeah, I think that makes sense. But I think there's even more that tech companies could do. On top of that, I think they could experiment with some of these audio watermarks, in the same way that we see watermarks in images. Can they put some tiny artifact inside the audio that can reveal that it was made by one of these synthesizers? You can also just do something on the level of content moderation. If you're one of these voice synthesizer studios and somebody types in a sort of inflammatory paragraph to make a synthetic voice say it,
it's reasonable to me for the synthesizer to say, gosh, this doesn't really seem like a great use of our technology and maybe we'll let you do it, but maybe we'll actually make you tell us the reason for why you're synthesizing this. So at least we can have some sort of file. - Yeah, I totally agree with that. I also think that the other area that they should be really careful about is financial scams because this is the other big use that we are seeing already for these synthetic voice tools is that people are using them to basically
place fake ransom calls to people's friends and relatives: calling Casey's mom, pretending to be Casey, and saying, you know, I've been taken hostage and I need you to wire, you know, $100,000 worth of Bitcoin to this address. And that kind of thing is not only plausible, but we've seen demonstrated examples of it happening. And so, yeah, if you're going into ElevenLabs or one of these other sort of synthetic voice tools,
and you are creating synthetic voices that are saying things about money, about sending money, about needing money, I think that should immediately raise a flag in the system, and they should probably not allow you to do that. By the way, I did text my mom a couple months ago about this exact issue, and I said, hey, by the
way. If you ever get a call and it sounds like me and I say I've been taken hostage, make sure you call me on the phone, because it's almost certainly a scam. And I was so proud of my mom. She was like, oh yeah, I already saw a segment about this on the news. I'm way ahead of you on this. So one thing you can generally count on parents to do is be aware of scams on the internet, because they love to read about them. You're never going to sneak one by Sally Noonan. No, you're not. Yeah. No, I think this is a really important thing. I think people need to be having this conversation with their family members. Like,
there needs to be a secret passphrase or some question that you can sort of agree that you're going to ask before wiring money or sending money or doing anything involving, you know, a credit card number or a social security number. Like, you need to have a way of establishing with your loved one that it is actually you on the other end of that
call. I think this is a conversation that people need to be having with their friends and family right now. You know, this Mother's Day, when you call mom to tell her you love her, maybe just say, hey, what's our secret passphrase we can always use so that you know I've not been taken hostage? And that's going to warm mom's heart. Casey, what is our secret passphrase? If someone ever calls you about me being taken hostage, what should we do to demonstrate that it's actually the real thing? I mean, my initial answer was we should just say sandwich time.
I like it. Only you and I know that we like to eat sandwiches after the show. Now we've given that information away and we will have to come up with a new passphrase. Okay, we'll come up with a new one and we're not going to put it on the podcast. We refuse. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant. We're fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley.