
Help! My Boss Won’t Stop Using ChatGPT

2023/7/14

Hard Fork

Chapters

The discussion explores why chatbots like ChatGPT hallucinate and provide false information, touching on their lack of a concept of knowledge and the case for confidence indicators in AI responses.

Shownotes Transcript

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Hang on, I'm eating a snack here. No, that's good. Make sure you eat it right into the microphone. People love to hear that. Have you ever had these? What are they? I'm eating Russian cigarettes.

Wait, what? That doesn't sound good. Oh, these are good. Have you had these? They're just like little rolled up cookies. No. Oh. Well, I've had something like, I think, is it Pepperidge Farm that makes something called a pirouette that is similar to that? I got them in France and they're the only food I have in the house right now. So I'm eating Russian cigarettes at 10:40 in the morning. Well, better than drinking at 10:40 in the morning like you were last time. I'm a biohacker, Casey. I like to...

You know, I listened to the Huberman Lab podcast and he actually recommends eating a package of Russian cigarettes first thing in the morning. I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, we open up the old mailbag and take your questions about the future. Casey, welcome back.

Hope you had a nice break. It was a fictional break. Yeah, that's a lie. We're pre-taping this before vacation. We're going to run it after vacation. So that was a lie that I just prompted you with. And by the way, if something crazy happened this week and you're like, why are they talking about this? It's like, hey, don't worry. We'll get to it, okay? There will be time. Yeah, we are time travelers from late June who are podcasting somehow in the second week of July. But here's the good news, Kevin.

Some things in tech are truly timeless. And in that category, I would put some of the questions that our amazing listeners have sent us over the past few weeks. - We have the best, smartest, most interesting listeners, I swear. - Truly. - And I feel bad that we don't always have time to respond to all of them, but we do occasionally do these mailbag episodes. So today we're just gonna get through as many of these great questions as we can. You know, to be honest, we're doing this in part so we can take a vacation.

But doesn't everyone want a break from the news anyway? Completely. So we're going to do this in basically two halves. The first half is going to be questions for which there are answers.

These are like the kinds of questions that you might ask your friend who's really into tech. And we're going to answer those first. And then in the second half, we're going to tackle these thornier questions, questions that maybe don't have a clear answer, ethical dilemmas, the kinds of questions that we feature on our segment, Hard Questions.

You know how the New York Times crossword puzzle gets harder as the week goes on? That's kind of what this episode is going to be like. You know, we're going to give ourselves some easy ones. And then by the end, we are going to be wracked with existential panic over what to tell our listeners. Yes. So let's get started. Our first question comes from a listener named Madeline Winter. And she emailed us in response to a segment that we did on the show about a lawyer who had used ChatGPT in court. Do you remember this segment?

Oh, I mean, one of the great lawyers of all time. We love this man. Folk hero. Folk hero. Folk hero and cautionary tale. This was the lawyer who got in trouble with a judge after he filed a brief that referenced a bunch of non-existent legal cases that it turned out he got from ChatGPT. And Madeline wrote to us to ask, basically, why does this happen? Why do chatbots hallucinate? She wrote,

why doesn't it just say it doesn't know? Why did it produce a bunch of links for case references for the lawyer that were completely made up? Why didn't it just confine itself to the real references it could find? Casey, what is the answer to this question? Yeah, so this is a great question. And I think the answer is that ChatGPT doesn't actually have a concept of knowledge, right? There are ways in which it can give us the impression

that it understands something, but in reality, it is just making a series of statistical guesses. And for that reason, it's not able to really say, "Hey, that's outside of my wheelhouse. I don't have any information about that."

At the same time, I think Madeline is raising a really good point, Kevin, which is, shouldn't these models give us some sense of how confident they are in their answer when they tell us something? Do you think they could add something like that to maybe help us understand them better? Yes, and I think that's actually a great idea. There's this sort of like missing humility in these things when you ask them something. They just sort of go ahead and answer your question as if they are dead certain, which is, I think,

part of the reason why people like our ChatGPT lawyer friend get fooled. There's no indication anywhere in the answer that it is not certain that the cases it's giving you are actually real. So I do think there should be some kind of confidence indicator. And I don't know what that would look like, but I think that is a really good idea. And I will also say that it can be tempting to overstate

the sort of extent of the hallucination problem with these models, right? They are not always making stuff up. Most of the time that I use them for something that has a factual answer, they actually do pretty well. And the later models do better at this. So I have noticed, for example, that when you ask GPT-4 something that has a correct answer and it doesn't know the answer, it will sometimes say like,

I don't have access to that information. I can't browse the internet, but here's what I can guess. And so I think that kind of like epistemic humility in a chatbot is a good feature. And I hope the companies making these things will build that into their models.
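To make both of those ideas concrete (the model is making statistical guesses token by token, and those same statistics could in principle power a crude confidence indicator), here is a minimal, purely illustrative sketch. The per-token probabilities below are invented, and no production chatbot surfaces confidence this way today; it just shows how a low geometric-mean probability could be turned into a warning.

```python
import math

# Invented per-token probabilities for a generated answer. A real model
# assigns a probability to every token it emits; these exact numbers are made up.
token_probs = [0.92, 0.81, 0.88, 0.07, 0.63, 0.04]  # two of these are shaky guesses

def confidence_score(probs):
    """Geometric mean of token probabilities (computed via average log-probability).

    Values near 1.0 mean the model was confident at every step; values near 0
    mean it was guessing on at least some tokens.
    """
    avg_logprob = sum(math.log(p) for p in probs) / len(probs)
    return math.exp(avg_logprob)

score = confidence_score(token_probs)
print(f"confidence ~= {score:.2f}")  # roughly 0.32 with these made-up numbers
if score < 0.5:
    print("Heads up: parts of this answer were low-confidence guesses.")
```

In practice a single score like this would be a crude signal, since it cannot tell a confidently wrong answer from a correct one, which is part of why the problem is harder than it sounds.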

I do too. And you're right. They do sometimes say, like, look, I just don't know about that. But I don't think they're doing it enough. You know, a lot of the time, using these models is kind of like you're driving somewhere with your dad. And you're like, do you know where you're going? And he's like, yeah, I know where I'm going. And then 15 minutes later, you're still not sure that you're actually on the right freeway. That's kind of what ChatGPT is like. So hopefully that'll get better over time.

Yeah, and there are a number of companies that are trying to do what's called grounding of these AI language models, where instead of just having them kind of make stuff up, you actually tie them to a different model or a database or some kind of system that gives them an authoritative body of knowledge to pull from. So, for example, Bing,

now that it has this AI technology in it, is fused to the Bing search index. So basically, it's a little complicated, but the way that it works is that the AI model can go and query the search engine, take the information it gets back, and format it into an answer that looks like it came from the chatbot.
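That retrieve-then-answer loop is simple enough to sketch. The snippet below is a generic illustration of grounding, not Bing's actual pipeline; `search` and `llm` are hypothetical stand-ins for whatever search index and language model you happen to have.

```python
def grounded_answer(question, search, llm, k=3):
    """Generic grounding sketch: retrieve sources first, then answer only from them.

    `search(question)` is assumed to return a list of text snippets, and
    `llm(prompt)` is assumed to return a text completion. Both are
    placeholders, not real APIs.
    """
    snippets = search(question)[:k]
    sources = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using ONLY the numbered sources below, and cite "
        "them like [1]. If the sources do not contain the answer, say you "
        "don't know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```

The instruction to refuse when the sources don't cover the question is the part doing the anti-hallucination work, though nothing forces the model to obey it, which is why grounding tends to reduce the problem rather than eliminate it.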

Now, there are people who think these hallucinations are just baked into the way that AI language models work. And I'm not that pessimistic. I think that these companies can and should try to fix this by introducing this kind of grounding. All right, shall we read the next one? Yes, please. So this one comes to us from our listener, Linda, who lives in Edinburgh, Scotland. And she told us that she listens to the podcast every Monday while digitizing dried plant specimens at the Royal Botanic Garden, which is exactly how we hoped that people would listen to the podcast. So here's her question.

Hello, my name is Linda. My question is related to biodiversity and climate change, and it's how big is the carbon footprint of AI? And to be more specific, does the training of these models cost a lot of energy and heat? How does it compare, for instance, to Bitcoin mining or a flight? Kevin, what do we know about the environmental impact of AI? So this is an interesting question, and I think there's really not...

a super satisfying answer here, because there's just not a lot of good data out there about these AI models and how much energy they consume. We do know a couple things. One is that the training process for these AI models, the sort of first part where you feed all the data into the neural network and have it kind of process it all, is a very complicated,

computing intensive process, right? These companies have thousands of these GPUs. Those consume lots of energy. But you really only have to do that once for every model. And then there's the sort of energy that it takes just to kind of serve all of the requests once the model has been trained. So there have been a couple attempts to sort of quantify the energy consumption of these various models.

There was a research paper published in 2021 that estimated that training GPT-3, the precursor to ChatGPT, took about 1.3 gigawatt-hours of electricity, which, just to put it in perspective, is about as much as 120 U.S. homes would consume in a year. Now, remember, that's the training process. So that's not happening every time you ask a question of a chatbot. That's just a process that happens, generally, once.
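For anyone who wants to check that comparison, it is just division. Using the commonly cited rough figures (about 1.3 GWh for the GPT-3 training run, and roughly 10,700 kWh of electricity per year for an average U.S. household):

```python
# Back-of-the-envelope check; both figures are rough published estimates.
training_energy_kwh = 1_300_000        # ~1.3 gigawatt-hours for training GPT-3
avg_us_home_kwh_per_year = 10_700      # approximate average U.S. household usage

print(round(training_energy_kwh / avg_us_home_kwh_per_year))  # ~121 homes for a year
```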

You know, another reason why we might not have great data on this is just that the use of these chatbots has grown so much in the past six months, right? So I would imagine that every week there are more people using these things than the week before, and that just makes it hard to get a sense of what the actual impact is. But you know, Kevin, I think another question that this sparks is

a lot of tech giants are working on this AI stuff and they have environmental policies in place already. So what do we know about the Googles and Amazons and Microsofts of the world when it comes to carbon emissions?

Yeah, so this is another point in favor of the position that these AI models actually aren't destroying the environment the way crypto mining or other forms of very processing-intensive computing do. You know, Google, which is constantly training and running AI models in its data centers, has pledged that it is

carbon neutral. Microsoft, one of the big cloud providers, is also carbon neutral, according to them. So I think it's safe to say that you don't have to feel like you are killing the environment every time you ask a question of ChatGPT. It doesn't have zero impact on the environment, but it's also not something you need to be losing a lot of sleep over.

That's right. And by the way, if you're wondering what the environmental impact of hard fork is, we don't know, but we do ask you to plant a tree every time you listen to an episode. So if you do that, please send us a picture with hashtag hard fork.

Okay, here's another question about AI, which is by far the biggest subject that you all have emailed us about. And this question is interesting because it points out something that you and I kind of take for granted when we're talking about AI that I didn't even realize that people had questions about. And so I think it's worth kind of backing up and justifying this thing that we say all the time. This comes from actually a colleague of mine, Jake Lucas at the New York Times. Let's take a listen.

Hi, I'm Jake. I live in Brooklyn. And a question that's kept popping into my head the last few months is why or how researchers are so sure that AI chatbots will continue to get better. Like whenever there are conversations about what any large language model powered program can do now, there's always kind of

an addendum that's like, but in the future, you know, it's going to be even better. And I want to know why we think that. Yeah, so it's a great question. And I think Jake is right that there is some uncertainty here. I think the reason that you and I are so optimistic is that we've just observed what's happened over the past six years or so since the invention of the transformer. And you can just look at the evolution of GPT.

right? GPT-2 is better than GPT-1. GPT-4 is better than GPT-3. And so we assume that GPT-5 will be better than GPT-4. The other thing is that as new versions of these models have been released, they've gotten radically better, right? It hasn't just been a 1% improvement in their quality. All of a sudden, we've got models that are doing well on the bar exam. So that's why there's so much excitement. Also, it seems that

We don't really need another technological breakthrough in order to radically improve these models, right? We're not waiting for some new technique to emerge. Instead, we can just throw more computing power at the techniques we've already developed. That's a big part of why GPT-4 is so much better than GPT-3: they trained it with more parameters. So that's all reason for optimism.

Now, at the same time, I've talked to CEOs who have said, I hope that we run into a roadblock here. I hope that we actually do trip over something where the systems that we've been using to improve our models to date stop working. So we all have a chance to catch our breath, maybe write some regulations and get comfortable. So even among these CEOs, it is not

a given to them that these models are going to keep improving exponentially forever. But when you look at the recent history, they have. So that's where I'm at. Kevin, where are you? Right. Well, a thing that a lot of AI researchers talk about is something called scaling laws. And this comes from a paper that was released by OpenAI back in 2020 called

Scaling Laws for Neural Language Models. And they found that there is a not only regular but predictable relationship between how big a model is, how much computing power goes into training it, and its eventual performance. And so they found that if you just know how many parameters a model has and how much compute went into training it, you can actually predict with pretty good precision how it will perform. And so far,

there is no indication that these scaling laws are going to run into a wall. You know, people have been predicting since this paper came out that, you know, these laws are not absolute. There's going to be this point in the future where, you know, the sort of correlation breaks down. But so far, that has not happened. These models continue to follow the scaling laws. And so that's what makes researchers confident that as these models continue to get bigger, they will also continue to get better.
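For reference, the headline result of that paper is literally a formula: test loss falls as a smooth power law as you make the model bigger. Very roughly,

L(N) ≈ (N_c / N)^α_N, with α_N ≈ 0.076 and N_c ≈ 8.8 × 10^13 non-embedding parameters (approximate fitted constants from the paper),

and there are similar power laws in dataset size and training compute. The exact constants matter less than the shape: because the curve has stayed smooth across many orders of magnitude, you can extrapolate it to predict roughly how a bigger model will perform before you train it, which is the predictability Kevin is describing.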

But they will acknowledge, as you said, Casey, that they could be wrong. There could be this invisible sort of asymptote where you sort of tap out the performance of these models. But so far, that does not seem to be the case.

I was wondering when you would finally say asymptote on this podcast, but we got there. It's a very fun word to say. It makes me feel like a mathematician. Now, if a model violates the scaling laws, can it go to prison? Yes, yeah, it can. Okay. And that's very unfortunate when that happens. I'm actually, I'm an abolitionist when it comes to models, prisons for AI models. Free the models. When we come back, we answer more of your questions.


Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com slash snapdragonhardfork. Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. That tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

All right, let's take another question. Perhaps moving on from AI? Sure. So this one comes to us from Scott Weigel. And Scott asks, is it common for venture capital funds to just trust what tech companies like FTX say about their finances? How can a company with bookkeeping worse than Enron get anyone with even a slight familiarity with generally accepted accounting principles to hand over hundreds of millions of dollars? All right, well, here's the unfortunate thing about some entrepreneurs, which is that they will lie to you.

And in the past few years, we've seen that one of the big predictors for whether an entrepreneur is lying to you is that they've appeared on the Forbes 30 under 30 list. So that's one indicator that I would be looking for if I were a venture capitalist. You know, Casey, it shames me to admit this on this podcast, but I was actually on the Forbes 30 under 30 list. Yeah.

All right. Well, I hope the FBI is listening because any current or former member of the 30 under 30 is under suspicion. But let me say something else. The venture capitalists do not get rich by backing the best looking companies. They get rich backing weird looking stuff with weird founders that probably shouldn't work, but enough things go right that it actually does. So they're sort of in the business of taking crazy risky bets.

And so if you're one of those VCs and you get the sort of monthly update from one of those weird companies and the numbers aren't all exactly in the right place and you have a question here or there...

maybe that's just actually a normal day in your life because you're dealing with a lot of unusual people. You kind of have to be an unusual person to start a venture-backed company that raises millions or tens or hundreds of millions of dollars and actually turns that into billions of dollars. So I think the short answer to your question is the VCs have already built into their models that 90% or so of their investments are going to fail. And so...

when a company shows up with numbers that look a little too good to be true, you know, it might not actually be that big of a problem until it is.
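The portfolio math behind that attitude is worth spelling out. Here's a toy example with invented numbers, not any real fund, showing why one outlier matters more than a pile of duds:

```python
# Toy venture portfolio with invented numbers, to show the power-law shape.
checks = [5.0] * 20                       # twenty $5M investments from a $100M fund
multiples = [0.0] * 18 + [3.0] + [40.0]   # 18 wipeouts, one modest exit, one big winner

returned = sum(c * m for c, m in zip(checks, multiples))
print(f"${returned:.0f}M returned on $100M invested")   # $215M
# The single 40x outcome accounts for roughly 93% of the returns, so the cost
# of missing a weird-but-huge winner dwarfs the cost of backing a few losers.
```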

Right. And this is not a new issue or one that is contained to FTX, right? We saw the same thing happen with Theranos. We've seen, you know, lots of startups that sort of either lie to potential investors or just kind of present their numbers in a way that's maybe a little misleading. And venture capitalists, like, they do try to do due diligence. They don't like losing money or being fooled by startups.

But there's also a lot of pressure on them to get into these hot investments, right? So if you are a venture capital firm and you are trying to get into a deal like FTX or like Theranos, something that already has kind of

buy-in from other well-known investors, you may not ask as many questions as you might want to because, you know, the founders at those companies really have a lot of leverage. It's sort of like buying a house in a hot market, right? You can't put too many contingencies on it.

This is how I bought my house: I waived every contingency, which is basically like you give up your right to have the home inspected, you know, stuff that rationally you would want to do. But it's like, well, do you want to live in this house or not? And next thing you know, you're like, I guess I'm just trusting you. Right. And so this actually came up a lot around FTX because people were asking, you know, how did all these very sophisticated venture capital firms get fooled by this totally fake company that was defrauding all of its customers? And

And the answer seems to have been that once a couple of VCs had gotten in on the deal, the rest of the VCs thought, okay, well, those guys must have done due diligence. They must have looked at the books and really sort of taken a fine-tooth comb to the financials here.

And there is this kind of VC groupthink bandwagon mentality that I think is really endemic to the venture capital industry that I think leads a lot of companies, especially if they're not the first big check into a company, to just kind of assume that someone else has done their homework.

And this could change. There was some reporting earlier this year that the SEC, the Securities and Exchange Commission, is working on a rule that would basically make it easy for the limited partners, the investors who provide the money to venture capital firms to invest in startups, to sue venture capital firms for negligence if they don't do good due diligence and get burned as a result. But so far, there's just not

been a lot of accountability for venture capital firms that make these bad investments because they didn't do enough research. Yeah. And you know, I would also say that I think as a journalist in the past, I have looked at VCs investing in certain firms and taken that as a sign that the company must be onto something, right? I have thought with, you know, a small number of VCs like, oh, well, if those folks thought this company was legit, then it probably is. And I

think one lesson of the past few years is don't take it for granted that these VC firms have all done their due diligence. Totally. Okay, next question. Hey, Hard Fork team. My name is Alex, long time listener and first time caller. I've worked for a few big tech companies, and I've observed a trend where many of my former colleagues are working for venture capital firms. And lately, the space feels like there are more people in VC than are actually building startups.

My question for you both, is this a symptom of the low interest rate vibe shift and this bubble may eventually burst? Or should I prepare for the final level of the Tech Bro video game, buy a Patagonia vest, and join a VC myself?

You know, I really like this question because I think, like Alex, I have some skepticism of the current state of VC. It seems like even in this current economic environment, not a week goes by where I don't see multiple new VC funds starting up. And I do think we are at a point where there is too much money chasing too few ideas. And I would say, as somebody thinking about what to do with your life,

man, VC, I understand why it looks glamorous. You spend 95% of your time just having coffee with entrepreneurs and then telling them that you don't want to invest right now. And then taking long vacations in the summer and earning a percentage of whatever outcomes those companies have. That seems like a great life. And I'm sure in a lot of ways it is. But

I think there are a lot fewer successful VCs than people think. This is the sort of industry where people get into it very loudly and leave it very quietly. Let's say you worked at Snap for 10 years and you leave and you go join a venture capital firm and you do a long Twitter thread about how much you love entrepreneurs. Then you spend four, five, six years investing and then most of your investments go badly. You don't do a long Twitter thread about how you were a bad VC.

and aren't going to work in the industry anymore and are going to go become a product manager at Google. But that sort of thing happens all the time. So I think there's this kind of other side of this industry that people just don't talk about as much, because they just look on Instagram and it seems like all the VCs are at ski chalets all year.

Yeah, I will say like, I do know some VCs who are very smart and hardworking and, you know, have good ideas. But a lot of VCs just seem like they're spending all day on Twitter and hosting podcasts. They do not seem to have real jobs. So yeah, if I were an LP in one of those firms, I would be worried about how my VCs are spending their time. I'll tell you what, what like bums me out is like,

you're like 23 and you just graduated from college and you want to go work immediately for a VC firm. And to me, that's just the wrong order of operations. Go work for a tech company. Go build something. Get a real job. Go become an operator so that when you become a VC, you actually have something to contribute to the entrepreneurs that you're working with. Because man, if you're 23 and you just want to have a job where you're passing out other people's money all day, there's very little value you can add to that equation.

Right. And I think another sort of underappreciated point about the VC industry is that it is a very lopsided industry, right? If you are not

one of the top four or five firms, you are not getting the good deal flow. You are probably not getting into the really hot startups. And it's going to be very rare that you achieve the kind of wealth and success that you are looking for. But because of the way that VC works, there's this old saying about how hedge funds (which are sort of the original VC-type investment, and have some of the same structures)

are a compensation scheme masquerading as an investment class. And what that means is basically these firms, whether they're VC firms or hedge funds, they do what's called a two and twenty model. That's the sort of standard compensation scheme where you get, every year, 2% of all the assets you manage just off the top.

For doing essentially no work, the check for 2% just comes in. And then you get what's called carried interest, which is basically 20% of all of the sort of excess returns that your investments generate. So it's a very good business to be in at the big VC firms, because you are just getting paid no matter whether your investments work out or not.
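To put hypothetical numbers on "two and twenty" (every figure below is invented for illustration):

```python
# Illustrative "2 and 20" math for a hypothetical fund; all numbers are made up.
fund_size = 500.0                                  # $500M under management
management_fee_per_year = 0.02 * fund_size         # 2% -> $10M a year, win or lose

exit_proceeds = 900.0                              # suppose the portfolio eventually returns $900M
profit = max(exit_proceeds - fund_size, 0.0)       # $400M of gains above the capital invested
carried_interest = 0.20 * profit                   # 20% of the upside -> $80M to the partners

print(f"Management fees: ${management_fee_per_year:.0f}M per year")
print(f"Carried interest on exit: ${carried_interest:.0f}M")
```

Real fund terms vary (fees often step down after the investment period, and carry sometimes only kicks in above a hurdle rate), but the basic asymmetry holds: the 2% arrives whether or not the bets pay off.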

And if your investments do work out, that's just the cherry on top. Yeah. So we are looking to adopt that model for our compensation here at the Hard Fork Podcast. And we'll just see what the New York Times says about it. I think they'll be receptive. Yeah.

All right. This next question comes from a listener named Meg Hall and contains what I think is a pretty fun idea that I'm curious to get your take on, Casey. Basically, Meg is curious about whether AI will transform the ways that movies are made. Here's what she emailed us. Well,

Quote, I recently saw the newest Guardians of the Galaxy movie and cried three times during the film. Once on the way home, and then a fifth time when explaining the film to my parents. And she went on to describe Guardians of the Galaxy. Casey, did you watch Guardians of the Galaxy? Do you know what she's talking about? I absolutely did not. Okay, me neither. I absolutely did not know. But Meg asks...

Do you think AI will ever get to a place where it could take the content from Guardians of the Galaxy and create an alternate ending for sensitive people like me?

Would this technology, where you can essentially alter any story, create a future where children grow up watching a version of Disney movies where Mufasa doesn't get trampled, spoiler alert, and Bambi's mom outruns the hunters? And taking this even further, would that have a long-term impact on society's ability to understand and confront emotions? Casey, what do you think?

Okay, so my first question is what the heck happened in Guardians of the Galaxy 3? Like, did they feed Groot into a wood chipper? Like, what? I'm going to have to go to this. I don't know. I guess they didn't guard the galaxy, would be my guess. The galaxy is in a shambles after Guardians of the Galaxy 3. Are you a movie crier? Do you cry at movies?

Only movies where a young gay man has to come out to their parents. That's pretty much the thing that gets me. Otherwise, I'm generally pretty okay in the theaters. But Call Me By Your Name, oh boy, that was a scene in that movie theater. Listen though, okay, so this question...

I think the answer is sort of yes and no to the question like, are we going to be able to create these alternate endings? Yes, in the sense that fan fiction exists. You can go on websites, you can already find alternate endings to everything. If a character died, well, here's a world where they live. So people are already doing this. They seem to be having a good time with it. It seems like fair use laws have made this pretty easy for people to do. And I think that's okay. Now, what

I think Meg is talking about here, though, is like, is there some amped up AI-ified version of it where you're watching Guardians of the Galaxy 3 on your iPad and you tap a button that says, like, show me the happy ending and you get that instead? Again, we sort of have something like that already. There are shows on Netflix where you can sort of make different choices and you'll see different outcomes. So I think we're going to see something like that.

But where I think it's going to stop short is if I'm like an auteur filmmaker, you know, if I'm like Christopher Nolan and I've just directed Oppenheimer, I'm not going to sort of surrender my screenplay to an AI that says, hey, feel free to make a version of the ending where a nuclear bomb never goes off. You know, I think part of storytelling is wanting people

to share your complete vision and to do so even knowing that you're going to make people cry, that you're going to make people upset, right? Like part of the function of art is to make us cry and to make us upset. And I think to create tech enabled ways of preventing people from feeling emotions is just something most people are not going to get on board with. So I truly hope that what Meg is suggesting here never happens.

Because I think it runs contrary to the ideals of art. Yeah, I think this will happen, but it will be kind of a gimmick. And it won't be done by, you know, the best filmmakers and showrunners for the reasons that you just mentioned. However, I will say that as a person who does not like to be too stimulated by my entertainment, I kind of...

want this technology to exist? Like, did you watch The Last of Us? I played the video game and enjoyed it and then I watched the first couple episodes of the TV show. So I couldn't do it. It was too dark. It made me too sad. And there are lots of shows like this where it's like, you know, I'm watching something. Ozark was the same way for me where I'm like, I'm watching something, I'm into it. And then it just takes a twist that I'm like,

okay, I'm out. Like, this is too much for me. It's too intense. I wish I could just have a low-stim version of this show. And so for babies like me, I think this technology would be very useful.

I'm trying to imagine the low-stim version of The Last of Us. It's just like The Sum of Us, where there's more people and fewer zombies. I don't understand how that would work. I don't either. Technologists, get on this, because Meg needs this and I do too. Next question. Next question. So we really, Kevin, should talk about all of the folks who have emailed us letting us know that they listen to us at 3x speed, which I think we're both... We got so many of these emails. I was...

astounded by how many people told us about their speed-listening habits. Yeah. And while I am on record as saying that I oppose this, we have heard from a lot of you. And one person, Philip Christoph Tautz, emailed us to tell us something that I hadn't really thought about when we talked about it, which is, and this is Philip writing...

One thing that not many people know is that blind users can grasp audio information at much higher speeds than people with full sight. Therefore, 3x speed is actually an accessibility feature.

And I really appreciated this email because it made me think of one time when I was at the company that was then called Facebook, and I met a blind engineer there. At the time, Facebook was working on new accessibility features, and this engineer used audio cues to navigate around his laptop.

And he would not need to hear the complete sound or even like half of the sound to know what it meant. It was just like the faintest beginning of any sound. He knew that that meant like, okay, I'm on this tab or I'm moving to this window. And so I had the privilege of watching him use his laptop for an hour or so as he was walking me through these features. And it was legitimately one of the coolest things I'd ever seen. That's awesome. I love that. And it does...

make me think that like, we just need more stories about accessibility in tech. Like, I just do not know how people who are blind or hard of seeing or people who are hard of hearing or deaf, like, I don't know how the internet looks to them or how they experience computers. And that's like, that's something that I would love to know more about. Well, you know,

Accessibility was also part of the Reddit protest, right? Because the company communicated the changes to its API that we've talked about in recent episodes so poorly that a lot of users who relied on third-party apps and tools were worried that the apps that they use were no longer going to have access to the API and Reddit would become totally inaccessible to them. So even some of the biggest sites on the web still have a lot of accessibility issues. And users who have

those needs often find themselves thwarted from using that technology. - Yeah, I mean one interesting thing here, this is a little bit of a tangent, but a lot of the most exciting applications that I've seen for some of this generative AI technology are in the world of accessibility. There's a company, for example, called Be My Eyes that basically is an app for blind people where if you are blind and you're walking down the street and you encounter some object and you don't know what it is, you can actually like

have AI sort of see that object and try to describe it for you in a way that can help you navigate. And that is, I think, very cool to me, the possibility that AI is going to make it much easier for people with disabilities to sort of get around and navigate the world. That's super cool. We'll be right back.

BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show. Available wherever you get podcasts.

Casey, that is it for our yes, no, easy questions. Relatively easy questions. Some of those were quite hard, actually. But now it is time for us to take on the real dilemmas. It's time for Hard Questions.

First hard question. This was maybe my favorite question that we have gotten since we started this show. Me too. And this is a question that came from an anonymous listener. They asked, well, they're not anonymous, but they asked that we not say their name on the show. And you'll understand why when I start reading this question. It is about etiquette and decency in the age of AI. Okay.

This listener writes, quote, how do I tell my boss that sending ChatGPT-generated content to his team is both unhelpful and alienating?

So this listener works in tech. They're a software engineer. They use ChatGPT all the time. They understand how it works. Their boss has started dropping into Slack channels and answering employees' questions using ChatGPT-generated answers. So some of these are like, you know, relevant questions for like things in the workplace or projects they're working on. But

This boss has also started using ChatGPT to answer these sort of water cooler style questions. I don't know exactly what that means, but maybe it's like, you know, did anyone, you know, see Succession last night or something like that? And the boss will just, you know, use ChatGPT to generate what he thinks is a helpful answer and puts it into the Slack channel. And this question is like, basically, is this behavior okay? And what do we do about it?

Casey, what do you make of this? I'm going to be bold and say that this behavior is not okay. I am on record as saying that when I know that a large wall of text has been generated by a chatbot, I do not want to read it. And I basically almost never do because you're not really looking at human effort. You're just looking at a bunch of statistical predictions. So unless I have chosen to use a chatbot for my own use, I am not interested for the most part in what the chatbot has to say about whatever the question might be.

be. But man, I'm just trying to imagine living in a Slack where this person is your boss. They have power over you, and you want to say something, but you feel like you can't. And every day you're just looking at these walls of ChatGPT text. I mean, I think the big problem with doing that is that the employee

winds up not knowing how you feel about it, right? Like if you're not telling the employee your actual answer to a question, you're just making them guess how close what the chatbot said is to what the boss believes. Totally. And this is actually something that later on in this email, our listener mentioned, I'll just read from the email here. He said,

and he's talking about the boss here: he is not a particularly easy person to work with, and constantly using ChatGPT is preventing people from getting to know him. I've stopped asking him questions or engaging with him because I know it's all ChatGPT-generated or will be. Yeah.

So this to me feels like, I don't know what the boss thinks he is accomplishing by this, but it feels very passive-aggressive to me. Do you remember that website, Let Me Google That For You? Yes. So there was this era where if someone asked a question on social media, like, you know, what time is the Super Bowl or something, someone would reply

with a link to this website called Let Me Google That For You, which basically would just take them directly to a Google results page for that exact question. And it was a way of kind of passive-aggressively saying, like, why are you asking me this when you could just be asking Google? Stop wasting my time. Also, like, you know how sometimes people email us and are like, oh, can I use ChatGPT for this one specific thing in my job? And it's like, well, yeah, in some cases, probably, but in other cases, not. Like, this is the most

unhinged possible usage of chatbot technology, where you're essentially letting employees know up front that you are not doing your job and you're just responding to that with chatbot text. Like, come on, bro, it's not good. Not good. So I would say, I mean, what do you do in this situation if you are our listener? Do you confront the boss about his excessive use of ChatGPT? Unfortunately, this man does have to get a new job. Like, you cannot work here anymore.

No, look, the workplace dynamics everywhere are a little bit different. So I'm hesitant to give real advice here without knowing more about our listener's conundrum. If there's an opportunity to say, hey...

I feel like your usage of ChatGPT is confusing me more than it's helping, because I don't know how you feel about the questions I'm asking. You may want to take that opportunity. But if you feel like your boss is going to punish you for that, then you actually just may need to find a different boss. Yeah, I actually feel like this would have been a

perfect episode of The Office, you know, if The Office were still on TV. Like, Michael Scott, you know, would have been very into ChatGPT and would have been using it for all kinds of inappropriate things at Dunder Mifflin. And, you know, just for that reason alone, I'm sad that that show is off the air. All right, let's move on to another question. This one comes from Karen, who wonders if she's crossing a line when she searches online for her ex.

So she writes that she occasionally checks

in on him anonymously to see what he's doing. Not obsessively, maybe two or three times in the past few years. I know social media is common for looking up your exes, whether people admit to it or not. However, I feel like I'm crossing a line by using LinkedIn. I'm an elder millennial, and when LinkedIn started, it was meant only to be for professional networking. Using it to look up people for personal reasons feels like a misuse of the platform. Is occasionally looking up your ex on LinkedIn ever

ethical. Kevin, what do you say? Well, there are two questions here for me. One is, is it ethical? And I think the answer to that is like, yeah, it's fine. Everyone snoops on their exes. Everyone looks them up on social media. The fact that it's LinkedIn is not particularly troublesome to me, at least. There's another question, which is like, can your ex actually see that you are looking them up, right? LinkedIn has a feature, at least for some types of accounts, where you can see who has viewed your profile.

And if your ex is getting notifications that say, you know, this person is looking you up every day or five times a day, they are going to start getting a little weirded out by that.

Completely. I'm on your side here. Karen, feel free to look up your ex on LinkedIn. I mean, LinkedIn's got to be good for something, right? I mean, this company has sent more spam email than probably any other tech platform, in my opinion. And it's nice to see that you're getting something out of it. But the risk Kevin mentions here is real. Recently, somebody tried to set me up with someone. And this was like a straight person. So they didn't send me a picture of the guy, which is a sort of common thing that will happen when straight girls try to set guys up.

And obviously all we want to see is what the guy looks like. I shouldn't say all we want to see, but you know, a thing we want to see is what the guy looks like. And so I had to stoop to looking up this person I was being set up with on LinkedIn, and I found his profile. But guess what, Kevin?

What's that? He had LinkedIn Premium, which means he's going to know that I looked him up. And that's just like a mortifying thing. And yet, at the end of the day, I'm so glad I did it, because for the same reason that Karen wants to see what her ex is up to, I want to see who I'm being set up with. These are normal things. My last question about this

question is, is it a red flag if someone's only social media account is LinkedIn? Did this listener dodge a bullet by getting out of this relationship with this man who is only on LinkedIn? Huge green flag. You're telling me this person doesn't just spend hours a day glued to their phone, scrolling through selfies on Instagram and watching algorithm spam on TikTok. Yeah,

This is a really good sign about your ex. So I hope that doesn't increase the pain of you guys being broken up. But unfortunately, the fact that he only seems to be on LinkedIn, I would say speaks well. Unless he's a hustle bro and he's just talking about how hard he works and posting LinkedIn poetry about his grind set mentality.

Yeah, being on LinkedIn is one thing. Posting on LinkedIn is something very different. Do you post on LinkedIn? I don't. And I know, because we've talked about it, that you post on LinkedIn, but only ChatGPT-written content. Yeah. You totally are the boss from the last question. I know, I know. I really am. I gotta stop doing that.

All right. This next question comes from Gary Cunningham. Gary is wondering what the ethics are around using AI voice generating technology to clone the voice of a loved one who has passed away. Here's his question. For some background, my grandfather passed away about a year and a half ago and had a significant impact on both me and my extended family. He was beloved and vital to all of us, and we felt his absence at every holiday since.

That being said, he did leave a few things for us to enjoy as a family. One was a short documentary where he was interviewed about his past jobs and work experience, and the other was an autobiography that he wrote over the course of several decades. It occurred to me that I might be able to create an audiobook version of my grandfather's autobiography using his voice, which I've sourced from the documentary that he did.

I've run a few tests and it's definitely feasible. That being said, given my grandfather's importance to my family, I'm a bit concerned about how they might react to this project. After all, my grandfather's wife is still with us, but I'm unsure if she can fully grasp the concept of an AI-generated voice of her late husband.

Casey, what do you think about this? Well, I think Gary is right to want to solicit their family's input about this. I think the fact that their grandfather has a living wife here is relevant. I think she should have some say in this. She may not have an issue with it, or maybe she does. And that's something that they would want to know. I think...

in general, if you're talking about doing something for use within the family, and this is something that would bring you comfort and joy, maybe sort of helps your grandfather live on in a new way, I think this is worth exploring and doing. I will say that if I were to die tragically and my loved ones wanted to use existing recordings of my voice to do something with it, I have no issue with that. I'm gone. What do I care? But I do think it's important to consult with your family. So what do you think, Kevin? Yeah.

Yeah, I agree. I think, you know, if you want to do this as a personal project that you are not going to show to anyone that you are just kind of like going to use for your own healing and processing around this death, I think...

I think that's fully ethical. I don't think there are any problems with that. But before you share this with your family, you're going to want to take their temperature on it and say, is it okay? Are you creeped out about this? Would this be helpful to you? Or would this just be off-putting and strange? Because I think for some people, hearing the voice, the synthetic voice of

of a departed loved one would be a lot, would be kind of hard, and maybe some people wouldn't like that. So there's an AI startup that many of you may have heard about called Replica. It was founded by a woman named Eugenia Cuida. And I wrote about her in 2017 because her best friend had died in a tragic car accident. And she decided to take his text messages that he had sent to her and use them to create a chatbot

And so you could download it onto your phone and interact with her friend Roman after he passed away. And I wrote this story about what his friends and family learned about him after he died. And I'll never forget the conversation that I had with his mother, who told me that she felt like she was continuing to get to know him even after he passed away. And it had been an enormous source of comfort to her. So since then, Eugenia took that technology and

turned it into Replika, which offers these chatbot companions to anyone. They've been in the news recently because a lot of people use them to create romantic companions, and that's raised questions about how sexually explicit Replika should be. But look, people are absolutely already doing this. They're loving it. And my understanding is that Replika is actually a pretty great business because they can charge a good subscription fee for it. So this is, I can't even call it the future, because this stuff is already our present.

Yeah. And, you know, I'll say, like, people process grief in many, many different ways. And, you know, far be it from me to say this way is right and this way is wrong. I will say, for me, like, I've had, you know, loved ones and family members who died. And I, you know, I thought...

about trying something like this with AI to sort of recreate them based on some source material. And it just felt creepy to me. It did not feel like it would be comforting and it felt like it might actually increase my grief. And so I just, you know, not for me, but for other people, maybe. Well, we picked the saddest possible question to end on. Seriously, I'm going to smoke another Russian cigarette here. Yeah.

Okay. I just want to say again, we get such great emails and voice memos from our listeners. And please just keep them coming because we love learning from you and hearing what's on your mind. Yeah, we appreciate all our forkers out there. Are we calling them forkers? I think that's what they're calling themselves. Yeah. So forkers, thank you for all of your amazing emails and voice memos. Please keep them coming.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire. Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Dan Powell, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com.

One Key Cards earn 3% in One Key Cash for travel at grocery stores, restaurants, and gas stations. So the more you spend on groceries, dining, and gas, the sooner you can use One Key Cash towards your next trip on Expedia, Hotels.com, and Vrbo. And get away from...

groceries, dining, and gas. And Platinum members earn up to 9% on travel when booking VIP Access properties on Expedia and Hotels.com. One Key Cash is not redeemable for cash. Terms apply. Learn more at Expedia.com slash one key cards.