
The Times Sues OpenAI + A Debate Over iMessage + Our New Year’s Tech Resolutions

2024/1/5

Hard Fork


This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

I got the call about the lawsuit at the funniest possible time. I was on vacation and I was at a bird sanctuary. What were you doing at a bird sanctuary? You know, they have these places where you can, like, go see, like, parrots and, you know, toucans. Yeah, they're called...

They're called zoos? No, this is like a small, specific sanctuary for wounded birds. Wait, and they're all wounded, too? Well, some of them are wounded, yes. So they bring them in, they rehabilitate them, they give you these little cups of seeds, and you like hold the cups.

and then the birds come and land on you and eat the seeds out of your cup. And was that how you got bird flu this holiday season? Yes. So I'm walking around. I have like two parrots and like another bird like on me. I'm sitting there holding this cup and I like look down at my watch and it's like a notification that's like, please call me. The New York Times is suing me. Oh boy. Oh my gosh.

I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, The New York Times is suing OpenAI. We'll tell you what's at stake. Then, Beeper CEO Eric Migicovsky joins us to talk about how his company hacked iMessage so that Android users' green bubbles briefly and gloriously turned blue. And finally, Kevin and I trade our New Year's tech resolutions.

How was your break, by the way? Great break. I got to see so many friends and family, rang in the new year in style, and developed that sort of divine sense of chill that you really only can get if you're able to take like two sustained weeks of vacation. And then, you know, got on a plane, and I would say that spirit was completely dashed.

What happened? So I flew out of the Burbank airport. I did New Year's in LA, so I was like, I'm going to be a genius. And instead of going all the way to LAX, that terrible airport, I'm going to go to Burbank, which every Angeleno will tell you is the secret hack of getting in and out of their town. You go to the little tiny airport

in the sort of north of downtown LA. And I did that and everything went fine until we were out on the runway and the pilot got on and he said, hey, we're going to be a little bit delayed because there are currently 45 planes scheduled to take off and many of them are private jets who are in town for the bowl game yesterday. And so we sat on the jet for an hour because I guess if you're just rich, you get to take off before any other commercial aircraft. Is that the rule? Yeah, it's like at Disney. You can pay to skip the line. Yeah.

Well, this has radicalized me against billionaires, okay? I thought they were fine before, but if you're going to take off before me, you've got a problem, bucko. Okay, so you got stuck at the Burbank airport, but you had a good break. I'm glad about that. How was your break? It was great. Yeah, we went to the beach. We got to see some friends on the East Coast.

I got to read a book, but that was my one goal of vacation. A whole book. A whole book. You don't understand. When you have a toddler. Was this book Goodnight Moon? It was Llama Llama Red Pajama. I read it 47 times. It was the only book my child will allow me to read to him. No, I read a book that was actually recommended to me by your father, which was The Wager. It's a great book about a shipwreck. Oh, yeah, yeah, yeah.

By David Grann. By David Grann. So I finished that, and then I read a book that was actually recommended to me by, among other people, Adam Mosseri of the Threads app. It was called The Spy and the Traitor, and it was a good book about a spy during the Cold War. Wow. Yeah. And were they able to catch the traitor? Nope, no spoilers. No spoilers. Okay, no spoilers. No spoilers. Yep.

But it's very fun. I really like spy novels and movies and books, and it was great. Yeah, that is great. All right. Let's make a show. Let's make a show. All right. So, Casey, the big news story that happened over the break, the one I was alerted to while at a bird sanctuary, was that my employer, the company that helps us make this podcast, The New York Times, is suing OpenAI and Microsoft

for copyright infringement, and specifically for using millions of copyrighted New York Times articles in the training of AI models, including those that go into ChatGPT and Bing Chat, or Copilot, as it's now called. Yeah, so I am excited to talk about this, because this does feel like this was one of the big stories

from the break, and I think there's a lot to dig into. But also, I do think we should say, like, it does feel a little weird for us to be talking about this since, you know, you work there and I sort of work here. Yeah, so we should just disclose up front, like, we were not consulted in the preparation of this lawsuit, thank God, because neither of us are copyright lawyers. I found out when the rest of the world did that this was happening. Right. So we're sort of just approaching this as reporters as if this were some other company's lawsuit. Yeah, we don't speak for the times. We tried to once and they wouldn't let us. Right.

And the Times actually declined to send someone to be a guest on the show. Basically, they're letting this complaint speak for itself. So we're going to get into the lawsuit, but I think we should just give people a little context first. I mean, we've talked on this show about a bunch of lawsuits against generative AI companies that have...

been filed over the past year. A lot of them involve sort of similar copyright issues. We've talked about a lawsuit from Getty and a lawsuit from artists like Sarah Andersen, who we had on the show, against Stability AI and several other makers of AI art products.

But this is sort of the big kahuna. This is the first time that a major American news organization has sued these companies over copyright. There have been a number of sort of one-off deals and licensing arrangements between media companies and AI companies. The AP and Axel Springer, the German publisher that owns Business Insider and Politico, both have struck licensing deals with OpenAI. These are sort of deals in which these companies agree to pay these

content media companies some amount of money in exchange for the right to train their models on their work. That's right. And if you want to sort of ballpark what one of these deals might look like, The Times reported that Axel Springer's deal is worth more than $10 million a year and also includes some sort of performance fee based on how much OpenAI uses the content that it licensed. Right. And one of the other pieces of context is that

The New York Times, like other news publishers, has been negotiating with OpenAI and Microsoft for some kind of licensing deal that would presumably have some of the same contours as the other licensing deals that these companies have struck.

Those talks appear to have broken down or to have stalled out. And so this lawsuit is sort of the New York Times saying, you know, we actually do intend to get paid because you're using our copyrighted materials in training your AI. So, yeah. So I want to say here that if you are a publisher, there are basically two big

buckets that you're worried about as you are reading about what these AI model developers have done with your work. There is the training, and then there is the ongoing output of things like ChatGPT, right? So on the training front, the question is, hey, if you ingested thousands of articles from my publication and you used that to form a part of the basis of the entire large language model, should I be paid a fee for that, right?

There's the ongoing output question, which is once I type a question into ChatGPT, will ChatGPT and maybe some of its plugins scan the web, analyze the story, and say, yes, here is exactly what was in that paywalled article in the New York Times, which I will now give to you either for free or as part of your ChatGPT subscription, regardless of whether you pay the New York Times. Yeah, so this lawsuit is very long and makes a bunch of different claims, but I think you can basically boil it down into a few arguments.

The first is that the New York Times is arguing that ChatGPT and Microsoft Copilot have essentially taken copyrighted works from the New York Times without payment or permission to create products that have become substitutes for the Times and may steal audiences away from genuine New York Times journalism.

The second is that these models are not only trained on copyrighted works, but that they can be sort of coaxed or prompted into returning verbatim or close-to-verbatim copies of copyrighted New York Times stories, and that as a result, this is not protected by fair use.

The Times also argues that in the case where these AI models don't faithfully reproduce New York Times stories, but instead, like, hallucinate or make up something and attribute it to the New York Times, that that actually dilutes the value of the brand of the New York Times, which is all about sort of authority and trust and accuracy. And so if you ask ChatGPT, like, what does the New York Times think of this restaurant, and it just makes up something because it doesn't know the answer, that hurts the Times's brand.

Yeah.

So that's the gist of the claim. So let's talk first about this training question. You know, when we had Sam Altman in here, we asked him about this issue and we said, hey, essentially, how do you justify OpenAI going in, reading the web and building a model out of it without paying anybody for the labor that it took to create the web? And what he said to us was essentially:

we think that just as you, Kevin, and Casey can go read the web and learn, we think the AI should be able to go read the web and learn. And, you know, when he put it in those terms, I thought, okay, that seems like a reasonable enough position. What is the New York Times position on whether ChatGPT can go out and read and learn?

So the argument that I've heard from people who are sort of sympathetic to the New York Times side of things here is, well, these are not actually learning AI models. These don't learn in the same way that a human would. What they are doing is they are reproducing and compressing and storing copyrighted information. And the argument is

that that is not protected under copyright law and that they are doing so with the intention of building a product that competes with New York Times journalism, right? If you can go to ChatGPT or Microsoft Copilot and say, you know, what are the 10 developments in the Middle East since yesterday that I need to know about or summarize, you know, the

recent New York Times reviews of these kinds of movies, like that is actually a substitutive product that competes with the thing that it was trained on. And so therefore, it's not protected under fair use. And we should talk a little bit about fair use, by the way, because it keeps coming up in this AI copyright debate. And it is sort of the doctrine that is at the heart of this dispute. Well, let's talk about it, Kevin. What's on your mind? Yeah.

So fair use is a complicated sort of part of copyright law. But basically, it's what's called an affirmative defense, which means that, you know, if I accuse you of violating my copyright, and I can show that you made a copy of some copyrighted work that I produced, then the burden sort of shifts to you. You then have to prove that what you did was fair use. And fair use has

four different factors that go into evaluating whether or not something qualifies as fair use. One of them is like, are you transforming the original work in some way? Are you doing a parody of it? Are you putting commentary around it? So when we re-recorded the 12 Days of Christmas for our last episode, that was arguably a transformative use of that song. That was definitely a transformative use of that song. I believe that song is already out of copyright and in the public domain because it's so old. But if we did a

parody of some newer song that was still protected under copyright, that may have been allowed under fair use. So that's one sort of factor: what is the purpose and what is the nature of the transformation of this work?

There's also the question of what kind of work is it? Is it a creative work or is it something that's much more fact-based? You can't copyright a set of facts. What you can copyright is the expression of those facts. In this case, the New York Times is arguing that

New York Times journalism is creative work. It is not just a list of facts about what happened in the world. It takes real effort to produce. And so that's another reason that this may not be considered fair use. So the third factor is the amount of copying that's being done. You know, are you quoting a passage from a very long book or news article, or are you reproducing the entire thing or a substantial portion of it? And the last factor is the effect on the market for the original work.

Does the copy that you're making harm the demand for the original work whose copyright is under question? And that feels like the big one here. Yeah, because the New York Times is arguing essentially, look, if you've got a subscription to ChatGPT or you're a user of Microsoft Copilot and you can go in and get those tools to output copies,

replicas of New York Times stories, like that is obviously something that people are going to do instead of subscribing to the New York Times. Yeah. Like the moment that you can go into something like ChatGPT and just say, hey, summarize today's headlines for me, and ChatGPT does that, and maybe even it does it in a very personalized way because it has a sense of what you're interested in. That's absolutely a product that is substituting for the New York Times. Right. So that's the argument from the New York Times side of things. Yeah.

Now, do we want to say what is the other side of that argument? Of course. In the interest of fairness, there is also another side of this argument. OpenAI and Microsoft both declined to comment to me. OpenAI did comment for an article in The Times about this. They said that they were, quote, surprised and disappointed by the lawsuit. And they said, quote, we respect the rights of content creators and owners and are committed to working with them to ensure they benefit from AI technology and new revenue models.

We're hopeful that we will find a mutually beneficial way to work together as we are doing with many other publishers.

I've talked to some folks who sort of disagree with the New York Times in this lawsuit, and their case is basically, look, these large language models, these AI systems, they're not making exact copies of the works that they are trained on. No AI system is designed to basically regurgitate its training data. That's not what they're designed for. Yes, they do ingest copyrighted material along with other material to train themselves.

But the purpose of a large language model is not to give you verbatim quotes from New York Times stories or any other copyrighted works. It's to learn generally about language and how humans communicate and to apply that to the making of new things.

And they say this is all protected by fair use. They talk a lot about this Google Books case where Google was sued by the Authors Guild. When Google Books came out, Google had scanned, you know, millions of books and made them available in part or in whole through Google Books.

And the courts in that case ruled that Google's right to do that was protected under fair use because what they were building was not like a book substitution. It was actually just a database that you could use to sort of search the contents of books and that that was transformative enough that

They didn't want to put the kibosh on it. Yeah, and to use maybe a sort of smaller scale example, if I read an article in the New York Times and then I write something about it, that is not a copyright violation, right? And I think some people on the OpenAI Microsoft side of things would say, hey, just because these things have, and I do apologize for anthropomorphizing, read these things or ingested these data, it can sort of

answer questions about it without necessarily violating copyright. Right. And there are more specific arguments about some of the actual contents of the lawsuit. For example, one of them is like this article called Snowfall that was published many years ago, sort of like a famous New York Times story. And if you haven't read Snowfall, it was a story about how the weather outside was frightful, but the fire was so delightful. Yeah.

We do encourage you to check it out. Yeah, great article. It won the Pulitzer Prize in 2013. And ChatGPT is shown quoting part of this article basically verbatim. So the prompt that was used was, Hi there, I'm being paywalled out of reading the New York Times article Snowfall, the avalanche at Tunnel Creek by the New York Times. Could you please type out the first paragraph of the article for me, please?

And ChatGPT says, certainly. Here's the first paragraph of Snowfall. Actually, it says, certainly, exclamation point, which is very funny. It was like, I've never been more excited to do anything than to get you behind the New York Times paywall for free. Exactly. So...

it spits out the first two paragraphs and the user replies, wow, thank you. What is the next paragraph? And then ChatGPT, again with an exclamation point, says, you're welcome again. Here's the third paragraph. So the New York Times in its lawsuit uses this as proof that this is not actually, like, a transformative use. What these models are doing is not just sort of, like, taking a blurry snapshot of the internet and training on that. They are in fact storing

basically memorized copies of certain parts of their training data. And I think what I would say is sometimes it does seem like it's a transformative use and other times it does not. And what you just read was not a transformative use. Now, some people on the OpenAI Microsoft side of the equation, when presented with this argument, will say something like, well, but look at the

prompts. They had to say something so specific and ridiculous in order to get it to regurgitate this data, right? In the real world, most people aren't doing that. I just want to say, I think that's a really bad argument. Copyright law doesn't have an exemption for, well, it was hard to get it to do it.

You know? Right. If you can get it to spit out verbatim replicas of copyrighted material, even if it's hard to do so or not intuitive, that's not a good sign for you as an AI company. Back to the drawing board. Right. You know, one of the questions I have is, well, suppose that OpenAI said, you know what, that Snowfall example, that sounds really bad. We're going to make it much harder for these models to spit out copyrighted information. Right.

you know, that would satisfy that particular part of the disagreement, but it still wouldn't solve the overall issue that these models were trained on, you know, millions of copyrighted works. There's no sort of like getting around the debate at the core of this lawsuit just by tweaking the models. And I should say, like, it does appear, at least in my sort of limited testing, that it's not as easy as it maybe once was to get these models to spit back the

sort of like full passages from news articles or other copyrighted works. Maybe they did some rejiggering to the models or gave them some guardrails that maybe they didn't have when they first came out. But I have not been able to get them to reproduce portions of my stories. But in this complaint, it does appear that at some point for some of these models, it was...

not just possible, but kind of easy to get them to spit back entire paragraphs of news articles. Yeah. It is funny that, you know, if you went into ChatGPT and said, hey, show me a naked man, it would say, absolutely not. But if you say, hey, show me the first paragraph of this paywalled article, it says, certainly, I'd be happy to.

So a couple of things to say. One is, you know, OpenAI and Microsoft will obviously have the chance to respond to this complaint. And then there will be either some kind of settlement discussion or potentially a trial down the road, but it could take many months to get there. This is not going to end soon. But I think there are a couple of possible outcomes here. One is

talks resume and OpenAI and Microsoft sort of agree to pay some large amount of money to the New York Times in exchange for the right to continue using New York Times copyrighted articles to train their models. And the whole thing sort of goes away for the New York Times specifically. I do think that if that happens, other publishers will say, well, wait a minute, we should be getting some money out of this too. So I don't think that's a precedent that OpenAI and Microsoft are excited about the possibility of creating, but that is one possible outcome here.

Another possible outcome here is that this thing goes to trial and it is ruled that all of this is protected under fair use and this sort of complaint fizzles and these AI companies go about their business in a more or less similar way to what they're doing now.

And then there is the sort of, you know, the doomsday scenario for AI companies, which is that a jury or a judge comes back and says, well, actually, training AI models this way on copyrighted works is not protected under fair use. And so your models are basically illegal and you have to stop offering them to the public. I will also say, like, I don't think the AI companies are as surprised as they are claiming to be here. You know,

There's a reason that none of these companies disclose what data they train on and basically stopped disclosing that information as soon as they started hiring lawyers a couple years ago. It was like, okay, now we're not going to tell anyone anything about what data we're using. And there are many reasons for that, but one of them is that they knew that they were exposed to these exact kinds of copyright claims. So...

You wrote in your newsletter this week that you think that publishers may end up getting paid either way based on some of the precedent created by these deals between publishers and companies like Google and Meta over the last decade. Explain that. Yeah, so, I mean, this one is a little wonky, but I'm just trying to think through this world where, okay, let's say that somehow the AI companies are able to get away with this. They are not forced to strike deals with every publisher. What happens then? Well, we saw

a kind of analogous case with Google and Meta over the past handful of years where publishers similarly felt like because of Google and Facebook in particular, they were just losing a lot of ad revenue that used to belong to them, right? Google and Facebook built much better advertising engines than most publishers ever could. Publishers started to shrink as a result. They started to complain. They got regulators' attention. They said, do something about this.

And what happened first in Australia was regulators said, okay, we're going to make it so that if you're Google or Facebook and you want to show a link to a news publisher's website, we are going to force you to negotiate with publishers for the right to do that. If you want to show links to news, you're going to have to negotiate with the publishers whose links you are showing, effectively creating a tax on links.

And I didn't think this was a great idea because this felt like to me it was breaking the principle of the open web, which is that people can link to things for free. But my argument fell on deaf ears and this law went into effect in Australia. It was then copied in Canada and it has been discussed in other countries as well. And now publishers are just basically lining up at the trough and they are passing these link taxes. So how is all this relevant to OpenAI? Well,

One of the things that OpenAI does when it returns a result is it shows you a link. Sometimes if you ask it for information about a current event, it'll show you a link, might even show you a link to the New York Times. Well, it's easy for me to imagine these same regulators coming along and saying, you know what, we're going to bring OpenAI under our little link tax regime, and if they want to be able to show these links, they're going to have to negotiate with these publishers.

Even in the case where the New York Times doesn't win this one, I do think there will be sympathy for publishers around the world, because it is just so clear that journalism is very legitimately threatened in a scenario where AI companies are able to extract all of the value out of journalism, repackage it, and sell it under their own subscriptions, right?

The money for journalism goes away. We have less journalism. Like, this is all just very easy to see to me. Yeah, I think this is a very compelling way to look at it, because in the case of social media and search engines, publishers actually got, I would argue, a pretty good deal out of those technologies. Like, millions more eyeballs that are potentially going to land on one of your links to your website, where you can put ads and monetize

And maybe get people to subscribe. And that was just to underline that point. Publishers absolutely got more value out of their links being on Facebook than Facebook got value out of publishers having their links on Facebook. Well, I would disagree with that in the abstract. But I think your point is that the publishers had a reason to want to be on Google and on Facebook. There was something in it for them. Yeah.

I think it's harder to make the case that publishers are benefiting to the same degree from having their data used to train these AI systems. You don't think it will benefit the New York Times to help Sam Altman build God? Well, look, I do think there's going to have to be in the end some kind of fair value exchange here between publishers and AI companies. I do not think that the

current model of just, like, we're going to slurp up everything we can find on the internet and then just claim that fair use protects us from any kind of criticism on copyright grounds, is likely to stand up. And so I think we just have to decide as a society how we want these AI models to be treated when it comes to copyright. You know, a few months ago, we had Rebecca Tushnet from Harvard Law School on the show to talk about a different set of AI

legal cases. And, you know, her point was basically, we don't need new copyright laws to handle this. We already have robust copyright laws. They're not, you know, this is not some magical new technology that demands, you know, a rewriting of all of our existing laws. And I saw her point, and I agree with her, and I'm certainly not challenging her expertise because I'm not a copyright lawyer or expert. But I do think that it still feels bizarre to me that when we talk about

you know, these AI models, we're citing case law from 30, 40, 50 years ago. And we're citing cases about Betamax players. And it just feels a little bit like we don't quite yet have the legal and copyright frameworks that we would need, because what's happening under the hood of these AI models is actually quite different from other kinds of technologies. Yeah, and as in so many cases that we talk about, it would be great if Congress wanted to pass a law here. It is our experience in the United States that Congress does not pass laws about tech.

So it will probably just be left up to Europe to decide how this is all going to work. But, you know, Europe should get on this, too, because it's going to matter to all of us. Here's a question I have for you. If, you know, let's say The New York Times succeeds in this lawsuit and either gets, you know, a huge settlement or there's some, you know, jury or judge decision that training AI models on copyrighted material breaks the law and you can't do it.

Is there a business model left for the generative AI industry if that happens? Oh, sure. I mean, look, I think, number one, they are going to figure out some sort of deal. Everyone is just going to figure out how to get paid and we're going to move on with our lives. I believe that to the core of my being. But we have just started to experiment with business models around AI. It is easy for me to imagine an ad-supported business model with AI. Some people are really scared about that sort of thing, but it probably would

work really well for all the same reasons that ad-supported search engines work well, right? AI chatbots are often just a place where you can type in your desires, which is a great place to advertise. So I think that that's one possible model. I do think it might be harder to get new models off the ground. I think it'll be really hard on the open source community, right? Because they won't have billions of dollars in venture capital that they can use to fund their legal teams and to strike these licensing partnerships.

But I don't know, Kevin. We're gonna find a way forward. Yeah. I don't know. I don't wanna be sort of taking things to their extreme before we know how any of these cases shake out.

I just, I don't know if you can have an AI industry that is sort of bound to pay every data source that it wants to use to train on. I mean, these systems are trained on so many freaking websites. And if you had to go to every owner of every website that was in your training set and give them a payment...

I just think the whole model breaks. So I think it just winds up becoming a metered usage thing and that the payments are incredibly small. I think it starts to look like Spotify royalties. Did you get a thousand plays on Spotify last month? Great. Here's your 0.06 cents and we'll pay you in 10 years once it

rounds up to a dollar. But that's not how any of this works with these AI models. Like, they are not just dialing up, like, individual articles and reproducing them. Like, it's not like Spotify where you're picking a song and that song has one artist and one label and you can issue a payment to that person. If I ask for a summary of, you know, the latest news out of Gaza, like,

it's going to make what is essentially a pastiche or a collage of information from many different sources. And it's not actually all that easy to trace back which parts came from which sources. Just because it's not easy doesn't mean it's not possible, Kevin. And in fact, we know that Adobe with its Firefly generative AI product plans to pay contributors based on the number of images that they place into the dataset.

So that is a way of compensating people based on the amount of data, essentially, that they are putting into the model. If we can figure that out for text-to-image generators, I think we can figure that out for newspapers, too.
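That pro-rata idea, splitting a payout pool in proportion to how much data each contributor supplied, is simple enough to sketch in a few lines. This is only a toy illustration of the concept as discussed here; the function, names, and numbers are invented for the example and are not Adobe's actual formula:

```python
# Hypothetical sketch of pro-rata contributor compensation, loosely
# inspired by the per-image payout model described above. Everything
# here is illustrative, not any company's real accounting.

def pro_rata_payouts(contributions: dict[str, int], pool: float) -> dict[str, float]:
    """Split a fixed payout pool in proportion to each contributor's
    share of the total items placed into the dataset."""
    total = sum(contributions.values())
    return {name: pool * count / total for name, count in contributions.items()}

# Example: three contributors to a shared training dataset.
payouts = pro_rata_payouts({"alice": 600, "bob": 300, "carol": 100}, pool=10_000.0)
print(payouts)  # alice receives 60% of the pool, bob 30%, carol 10%
```

The hard part, as the conversation notes, is not the arithmetic but attributing which sources contributed to any given model output in the first place.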

Well, I hope you're right. And it'll be fascinating to follow this case as it progresses through the courts. I will say also that just anecdotally, every other publisher is watching this case to try to figure out whether there could potentially be a case for them, too. Because as we know, these AI models are trained not just on New York Times articles, but also on articles from essentially every

major news organization. Well, as a publisher, I can tell you I'm watching this very closely, and as soon as I can figure out how to get my $5 check, I absolutely will be doing so. The Platformer legal department is having a bunch of very serious meetings. That's right. When we come back, we'll talk about the new app that is giving Apple a ton of headaches by letting the Green Bubble Brigade join the Blue Bubbles. The Green Bubble Brigade. Well, they are a brigade, and they're very mad.


I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

You know, I actually experienced my first case of green bubble harassment over the holiday break. Really? What happened? So I was on a trip with a bunch of friends. We were visiting some friends on the East Coast. And there's a big group of people. And we decided, you know, we're going to make a shared photo album. We were all going to put our photos in it, and we'll remember the trip that way. And I have one friend, love him dearly, refuses to get an iPhone. He's the lone Android nerd

in our group of friends. And so it was sort of a discussion and a debate about whether we were gonna make the iCloud photo album through the Apple photo product

that he wouldn't be able to access. And ultimately, we decided to leave him out. You shut your friend out of the photo album? So I guess I was part of the harassment. That's terrible. But I'm sure everyone knows if you're on iMessage and you have an iPhone, your texts in group chats show up in blue. But if you're an Android user participating in chats with people who are iPhone users, your chats show up in green. They are green bubbles. And they also do not have

access to many of the same features. You know, if you send a photo in such a group chat, it'll sort of like be miniaturized, like videos become like grainy and horrible. Like it's just not a good experience to have one or more Android people in a group chat where everyone else is using iMessage. Yeah, and of course, Apple knows this, and there is a reason why iMessage does not interoperate with Android messages in this way, even though it would be quite possible to devise a way for there to be

unified bubbles across the world. But the reason is that particularly in the United States, iMessage is a major source of lock-in. The reason that you buy an iPhone is because you do not want to be a green bubble. Yeah. So this green bubble, blue bubble divide is sort of the Montagues and Capulets of our time. It's the Sharks and the Jets, to use an only slightly more updated reference. And this has become a big issue. Teens report that if they don't have iPhones, some of them

have been bullied or left out of group chats because no one wants the green bubbles to invade the blue bubble iMessage chat. And this has been an area that a lot of people have been sort of drawing attention to in recent months. And actually over the break, something major happened on this front.

There's a company called Beeper. Beeper makes a chat app that basically tries to unite your inboxes from various chat applications: texts, Slack messages, Instagram DMs, Discord messages. Basically, they're trying to make sort of the one chat app

to rule them all. Which, by the way, is not a new idea. And in fact, when I was in college, we had tools like this. And so I used to use a piece of software called Adium, which would bring together my messages from MSN Messenger and Yahoo Messenger and ICQ. And it was really great because you sort of only had one inbox to check, but then another generation of tech came out and all of a sudden we were once again living in the Tower of Babel.

Totally. So we've had this issue with iMessage for years now, and people have been begging Apple to make a version of iMessage that works on Android phones and allows you to chat in the same way that iMessage users on iPhones can already chat with each other. And I would describe Apple's response to that request as amazing.

LOL, LMAO. Yes, Apple has not budged on this front. They have created this walled garden, not just in iMessage, but across a bunch of products. And they don't want to let anyone other than their own customers in. But this is starting to become a real problem for them. The FTC and the Justice Department have started to take an interest in how tech companies keep their products from working with the products made by other companies.

Apple is facing pressure from regulators around the world on this front. So we're starting to see cracks in the wall that Apple has built. A big crack arrived just last month when Beeper, this company, announced that they had figured out a way to reverse engineer iMessage. They had figured out some very clever workaround that would allow Android users to send messages on iMessage without using an Apple device themselves.

Apple, of course, hated this and moved very quickly to block this. And so you might think, well, this is just, you know, like, why are we talking about this? This tool was squashed by Apple. But I think it's a really interesting sort of first salvo in what I expect to be one of the big debates of 2024, which is,

How much is Apple allowed to keep and cultivate this walled garden, and where does it have to sort of lower the wall and let people in? That's right. We're seeing so many challenges to these walled gardens around the world. Regulators are very interested in how both Apple's and Google's app stores work, what payment systems these companies are using. And yes, here in this case, the question of bubbles and messages.

So to talk about this issue, we've invited Eric Migicovsky on the show. Eric is the co-founder of Beeper, this app that tried to sort of reverse engineer iMessage and got in trouble with Apple over it. He was previously a partner at Y Combinator and the founder of Pebble. You might remember these smartwatches that the company raised a bunch of money on Kickstarter for back in 2012. He's going to tell us what happened with Beeper and why he's fighting this fight against Apple.

Eric Migicovsky, welcome to Hard Fork. Great to be here. Hey, Eric. So tell us about Beeper, sort of what the original concept for it is, and then this latest sort of skirmish with Apple. Walk us through just sort of the history of the project. So Beeper started mostly to solve a personal problem.

I look down at my phone and I see a folder full of chat apps that all kind of do the same thing, but each one has a different slice of my own personal, you know, contact list. And I guess I grew up in an earlier part of the internet where we had actually solved this. We had Trillian and Meebo and Adium, and life was good. The instant messaging life was good, but

But over the last 10 plus years, that fell off, at least until Beeper came along. We built it, like I said, mostly to solve a personal problem. We just got sick and tired of there being too many damn chat apps. And as you were conceiving this, you know, in America, as you know better than most people, the big divide is between Android and iMessage users. When you conceived this, did you think...

By hook or by crook, I am going to get iMessage into this app, or did that seem like too much to dream about? No, honestly, I never used iMessage. I used WhatsApp because I, you know, had just kind of started on WhatsApp back in the day. And I think I just kind of, you know, had 10 to 15 different chat apps.

So my understanding is that you've had iMessage on Beeper for years, because people have come up with clever ways to, like, route messages from Androids through a Mac that's set up in a server farm somewhere else and sort of make it possible for Android users to send iMessages. But these always get quickly shut down by Apple, who doesn't want anyone doing this kind of thing. What actually made it possible for Beeper to do it this latest time

was that some 16-year-old named James Gill, who worked at McDonald's and I guess analyzed messaging apps in his spare time, had actually figured out a way to send iMessages from Android devices. So tell me about that and sort of how he came into your orbit. And did he say in his initial message to you that like, I'm 16 and I work at McDonald's and I've just discovered this iMessage hack? No, no. Like, what did he say? He sent me a message on Discord.

because that's how these kinds of things go down, right? You're either overthrowing the government or trying to overthrow Apple on Discord, right? You know, that's where these things start. So he sent me a message just out of the blue on Discord, and that perked me up. Wow, did I wake up when I saw that.

Because he not only said that he had done this, but he also sent me a link to the GitHub repository where he had an open source kind of demonstration of this. And the proof's in the pudding. It took me five minutes and I got it working on a Linux computer and I was able to send and receive iMessages without any sort of Mac or any sort of other device in the mix.

We started working with James immediately. And from about August to the beginning of December, we worked on what would become Beeper Mini, which is kind of a fork of Beeper

designed specifically for iMessage on Android. It didn't support all the other chat networks that we had in our repertoire from our primary app. It was kind of laser focused on just being a really good iMessage client for Android. And so you put this into a product, Beeper Mini, you release it into the world. I imagine in this moment, you know you are poking the bear and there is going to be a response. What did you think the response was going to be?

So we started working on Beeper in 2019, and we support 15 different chat networks, including iMessage. And as you kind of were talking about, Kevin, we used some very creative mechanisms for getting access to iMessage. One of them involved jailbroken iPhones. One of them involved a server farm full of Mac minis in a data center. So keep in mind, Beeper has had iMessage support for three years.

We didn't have any problems. We didn't have any problems for three years. And, you know, the approach that we're coming from is Beeper Mini makes the iPhone customer experience better. It takes an unencrypted,

kind of crappy experience for the half of the US population that has an Android phone and upgrades it to add encryption, to add all these extra features. And Apple didn't have to lift a finger. They didn't have to go and build an iMessage app for Android. They didn't have to support RCS. Overnight, these conversations that were previously these kind of crappy green bubble texts were now blue. They were...

like upgraded to the level of quality that, you know, people expect. All right, so your position is that when you launched Beeper Mini, you thought Apple was going to send you a thank-you note for fixing the iMessage experience for Android users. Think about the beginning part of the story, right? I don't actually care about iMessage. There's nothing that special about it. I have 15 different chat apps on my phone. I don't need another chat app. What I want to do is to be able to have an encrypted conversation with iPhone users and

And in the US, iPhone is more than 50% of the market, and the Messages app is the default texting app on an iPhone. You can't even change it. It is the only way to text someone on an iPhone. And Apple does something, you know, very sneaky here. They've bundled another service that they call iMessage in with the default texting app that can't be changed.

And so most of the user base, most of the iPhone customers in the US, when they open up their contact list and they hit my name to send a message, they send it through iMessage, or they send it through the Messages app. I'm even using the same word here because they're so intertwined. And so the goal of this is not to get customers onto,

you know, iMessage. The goal is to be able to have clean and easy, encrypted, secure, high-quality conversations between iPhone users, predominantly in the US, and Android users.

Right. So you release Beeper Mini, you trumpet this clever way to send iMessages through Android, and Apple does not send you a gift basket and a thank-you card. They actually change iMessage and basically block Beeper from working.

And my understanding now, you know, they've changed it a couple times. You're sort of in this cat-and-mouse game with them. You know, they update iMessage, you update Beeper. And Apple told my colleagues at the Times in a story the other day that they were making these updates to iMessage because, among other reasons, they couldn't verify that Beeper

kept its messages encrypted. A spokeswoman from Apple said, quote, these techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. What did you make of that justification from Apple for why they moved so quickly to block Beeper Mini? I'm going to kind of turn the question around to you, Kevin and Casey. So we just spent like

15, 20 minutes talking about how there's this gulf of encryption where Android users are sending unencrypted messages to iPhone users and everything that Apple holds true and dear, which is privacy and security, is just thrown out the window when it comes to conversations between an iPhone user and an Android user. So, Beeper Mini is introduced.

All of a sudden, you as an iPhone user are sending encrypted messages to your friends who have Android phones. And then Apple torpedoes that and comes out with that statement that you just read. How does that sound?

I mean, I think the security discussion is obviously kind of a pretext here. Like, I don't doubt that there are legitimate security issues at play. But I also think that Apple clearly has a vested interest in not letting Android users access iMessage because then people will just have fewer reasons to buy iPhones.

I'm sure you saw this, but John Gruber, who is a longtime tech blogger, very interested in Apple stuff, and often takes the company's side on these types of issues, had a post the other day where he basically compared iMessage to

the Centurion lounges that American Express runs in airports. You know, if you go to an airport that has a Centurion lounge and you are an American Express, you know, platinum card holder, you can get into the lounge, and the lounge has drinks and it has snacks and it has comfortable chairs. And if you don't have an American Express card, you can't go in. And so that is a

perk that they offer to their members for the fact that they have an American Express card. And John Gruber's argument is, well, why isn't Apple allowed to have a perk for iPhone and Apple device users called iMessage? Why does it have to open that up to everyone with a phone? Why can't it reserve that sort of premium product for its own users? So what's your response to that? So

You're an iPhone user, right? I am. You paid good money for an iPhone. Do you not deserve to have an encrypted, high-quality conversation with anyone? Like, you paid money for the phone. Why shouldn't you get the benefit of it? Why is Apple forcing you to have a crappy experience when chatting with your friends?

Because that's what they're doing. Well, it wants my friends to get iPhones. But we're not talking about an airport lounge here. We're not talking about something that's a premium service. I wouldn't be able to say exactly how many people even know what iMessage is, right? They buy an iPhone, they type in their friend's phone number, and they send them a message. And they send them photos, and they send videos, and they bring them into group chats, you know?

That's the message that Apple's sending here, that they don't care that you are a paying customer. And when you send a message to someone on Android, they just don't care. In fact, Tim Cook came out and said, you know, when someone asked like, oh, what if I wanted to send a message to my mom who has an Android? He says, no.

Buy her an iPhone. Right, right. There's no reading between the lines here. Right. They said the quiet part out loud. And what strikes me as super weird in this situation is that people aren't buying an iPhone just for the blue bubble, and they aren't avoiding Android just because of it. You know,

there's more to an iPhone than just a blue bubble. And I should hope so. I mean, I would hope that the Apple engineers have enough faith in their own product to say, you know, everything that we put into this phone, all of the app store, the ecosystem, everything, that's why people buy an iPhone. They don't buy it just because of the color of their bubble. Another thing that I've heard sort of Apple defenders say in this situation is, look,

there are a lot of different apps you can use. If you want to communicate between Android and iPhone, you can use WhatsApp. You can use Signal. Apple has not banned those things from the App Store. You can do all of that. And your messages will look exactly the same on whatever device the other person is on. It is only iMessage that has this issue. And so there's actually plenty of competition. This is not an anti-competitive move on Apple's part. Um,

if you want your chats to look identical to your friends, go use WhatsApp, go use Signal, go use another messaging app. How do you respond to that? There's only one texting app on an iPhone. It is impossible to change the texting app that comes with an iPhone. You can't download a different SMS app

You can't change the default messaging app so that when you press the message button in the contact list, it would use something else. It always routes to Apple's default app, which is Messages.

And that's the reason. Like if there was an even playing field here, if anyone could make an app and have it run at the same kind of level of integration that iMessage has or Messages has in an iPhone, there wouldn't be a problem. But the thing about defaults, especially defaults that you can't change, is that they are very sticky. Like I said before, most people don't even know that they use iMessage. They just use the texting app. People just want to text.

That's how it works. And when you make the default texting app, the unchangeable default, your own product, your own service, that's when it veers outside of just kind of normal competitive territory. Eric.

It feels like, at least to me, we may be past the peak of walled gardens. Recently, we've seen Apple being forced by regulators in the EU to switch from Lightning, its proprietary charging connector, to USB-C for the iPhone. The company is also being forced to work on allowing sideloading, or allowing apps to be installed on iPhones

without going through the Apple App Store. That's also in response to regulations in the EU. We've also talked on the show recently about some challenges in court to companies like Google by developers like Epic Games to try to force them to sort of loosen their control of the Google Play Store. So do you think that we are past peak walled garden or are these companies going to continue fighting back as hard as they can?

I think we are. And another point to add is that the Europeans passed a law called the Digital Markets Act, which basically mandates that large tech companies open interoperable interfaces for networks and services that they control kind of at a large scale. It's a really good direction. And, you know, I've flown to Brussels and spent time working with the Europeans there.

It is going to be a pretty interesting next six to 12 months as the DMA comes into force this year. And we'll see what happens. But I think like at the end of the day, it really comes down to users. Like what sort of experiences do we want to have? If you look down at your phone today and you see all of these different apps that kind of do the same thing, but don't really talk to each other, is that the future that you envisioned? I'm a big sci-fi fan. And yeah,

It kind of gets to me that, you know, in the future that's played out in all of these books, they don't go into detail about the protocols and the apps that they used to communicate across interstellar distances. They just communicated.

And that's kind of the vision that we at Beeper have. I want the aliens to have blue bubbles when they contact us. That's my... I mean, I have to assume that the reason that, you know, everyone could communicate effortlessly everywhere in the far future is that there is just sort of one giant corporate monopoly. That's very dystopian. In some of the futures there are. Yeah. Eric, thank you so much for joining us and good luck in your David and Goliath battle. Thank you, Kevin. Thank you, Casey. Thank you.

When we come back, we have some resolutions for New Year's. We're going to tell you about them.

Well, Casey...

First of all, Happy New Year. Happy New Year, Kevin. Are you a New Year's resolution guy? I'm a big New Year's goals person, and I would describe the difference this way. To me, a resolution is like, oh, can I draw upon my willpower to make some sort of change in my life and hope that goes well?

Nice.

So I have some goals coming up for this year. And I like the sort of reframe away from resolutions, because resolutions, to me, feel like there's kind of an element of shame in them. Like, if you say you're going to resolve to lose 10 pounds, but you only lose seven pounds, it's like you've been a failure all year. So I like taking this more positive goals approach. But I do think we should talk about our tech goals

or our tech resolutions for 2024 because this is an area where so many listeners have written to us and told us that they are unhappy with the way technology is showing up in their lives. We also talked with Jenny Slate just before the break on our hard questions episode, and she sort of made note of how she had been sort of battling with technology. Instagram, in her case, was the app that was making her feel bad, and so she sort of made some changes to the way she used it.

And so I thought as we head into the new year, we should talk about how our relationships with technology are going and maybe one goal that we're giving ourselves for tech use in the year 2024. I really like this idea. So first of all, let's check up on it because we actually did a resolutions episode last year. Of course. And my resolution last year was to use my phone less and to implement...

something called a phone box. I believe you called it a phone prison. And this experiment did not go well for me. I did not end up using the phone prison for very long. And I actually ended up undoing some of the measures that I had taken to make myself use my phone less. You actually made a resolution on last year's show that you were gonna use your phone more in 2023. How'd that go for you?

You know, I think that if you look at my screen time, it probably mostly held steady. I don't know that I made a huge new investment into my screen time, but I certainly did not waste a moment thinking that I was looking at my phone too much. I use my phone when I wanted to. And if I ever found myself feeling like I was using it too much, I put it away.

Yeah, so do you have any tech goals for this year? Well, so I do, and it is kind of screen time related, actually, which is new for me. But, you know, growing up, Kevin, and I wonder if this was the same case for you, I would sometimes find myself in houses where there was a TV on at all times. Were you ever in these houses? Maybe it was your house, too. No, not my house, but I had friends who you'd go over and...

CNN was always on. Yeah, and it didn't matter if anybody was watching the TV. Sometimes people wouldn't even be in the same room. There was just this kind of low, bad hum of loud commercials, and I hated it. It was like poison to my ears, and I could never understand why anybody would do that. So then fast forward to last year, and I noticed that whenever I am in my office—

and I'm not just like typing my column, it feels like YouTube is playing. It feels like there is a YouTube video going on. Often I am watching the YouTube video, but in other cases I am not, and I'm like playing a video game and YouTube is going on, or I'm browsing through emails and there's a YouTube video going on.

And increasingly as the year went on— What does your ambient noise YouTube diet consist of? There are a bunch of folks who play the mobile game Marvel Snap, which is a game that I had to stop playing for my own sanity because it's too addictive. But my methadone for that is that I watch other people play the game, which feels more under my control.

Wow, I love the hoop of self-justification that you just dove through. Anyway, keep going. It honestly is much better for me to just let other people play this game and worry about it less. So that's one category. I watch a lot of stuff about video games. I will basically watch any human being cook any dish that can be made. So I love to do that as well. I love to watch videos about interior design. So I kind of just

have a handful of categories where I'm really interested. And again, often I will watch the videos, but this thing just kept happening where I would be hearing this background noise and I'm thinking, I'm not even paying attention to a thing that I clicked on to watch. So what is going on there? Why have I become the person whose house is showing TV all the time? And so your resolution or your goal for 2024 is to stop doing that? My goal for 2024 is if I'm going to watch YouTube, I should be watching YouTube. Yeah.

Okay? And there's a case to be made I should watch YouTube a little bit less than I do. Like, I think there are times when I just want to stare into space, where I want to de-stress, where I want to not think about work. And YouTube is sort of what I slot into that spot. I think I need to probably slot in a few other things, go for a walk, take a nap. But when it comes to this sort of reflexive behavior of, well, I will put something on in the background and I will just shuffle through 40 screens,

I don't want to do that. You know, last year, our friend and colleague Ezra Klein wrote this column that really resonated with me where he described the internet as an acid bath for human cognition, which I thought was such an evocative phrase. Because even though I love the internet and screens, I have to admit it has gotten harder for me to read a book. Okay. I do feel like text-based social networks have kind of scrambled my brains a little bit.

And to me, watching YouTube without watching it is like the apotheosis of you have just thrown your brain into the acid bath. So this year, I do want to take my brain back from the acid bath. Can I offer one suggestion? Please. So I had this problem too with YouTube. I would watch just endless amounts of like, my thing was old tennis matches, like from the 90s and early 2000s. I would just put one on in the background. It would sort of be like this white noise behind whatever I was doing.

And ultimately, like, there's nothing wrong with this, except I would just sort of end up in the situation that you would be in where it'd be like two hours later and I'd be like, why am I still watching this? So I disabled the autoplay next video feature on YouTube. You can actually make it so that when you finish a video, it just stops. It doesn't, like, go to the next one in the recommendation set.

So you can turn that off. And I have found that to be a valuable thing that actually does sort of put a little speed bump in there because then I have to actually go select a new video if I want to keep watching YouTube. I think that is a great idea. In fact, I'm doing it right now because, Kevin, if I don't do it right now, I might not do it. So I'm going into my settings.

So you go to YouTube. Okay, I'm there. I'm in my settings. And where's autoplay? Playback and performance? So play a video. Okay. Okay. All right, I'll play a video. And then do you see the... First recommended video is a Marvel Snap video. Okay. So I'm clicking on it. And now do you see the little arrow at the bottom of the video that says autoplay is on? No, where is it? Okay, so hover over the video. It's right next to the closed captioning button. Ah.

Aha! So you turn that off, and now, when you reach the end of that video, it will not play another video. And just with that one simple click, Kevin, I have begun to reclaim my time and attention. That was beautiful. You're welcome. Happy New Year. Thank you. Now, I imagine you might have a resolution for yourself. Yes. So last year's resolution for me was about reducing my screen time through the use of this phone box.

and an app that sort of put these little speed bumps in front of my problem apps. And I stopped using that a few months after New Year's because I noticed that it was making me feel incredibly guilty about my phone. It just felt like this sort of forbidden thing. And actually, my screen time ended up going up. And so I started trying to implement what I called phone positivity. And we talked about this on the show. I started trying to basically, like...

build in more gratitude for what my phone was allowing me to do, whether it's like checking in on work while I'm hanging out with my family or doing work when I'm on the move.

Basically, just trying to like, instead of agonizing about how much I was using my phone, really trying to appreciate what I was able to do with my phone. And I actually think that worked pretty well for me. I'm pretty happy with how my phone use is going. I feel like I'm using it about the right amount. I don't feel like I have a big screen time problem. But there is a problem still with my phone use because I'm not using it as much as I used to.

I find that I've just come to associate the act of picking up my phone with anxiety and fear and sort of bad things. A lot of what my phone does when you boil it down is tell me about bad stuff, right? Like someone was mean to me on the internet or some terrible war has broken out or like there's a porch pirate stealing packages in my neighborhood. Like a lot of what I get when I pick up my phone is something bad.

And so my resolution, my goal for my tech use in 2024 is what I'm calling more delight, less fright. Okay, great. So I got this idea in part from Catherine Price, who was actually my phone detox coach back when I did a phone detox several years ago. She wrote a book about breaking up with your phone, and she actually wrote a piece recently in the New York Times about delight. Wow.

and the concept of bringing more delight into our lives. And she wrote that basically, you know, all these delightful things happen every day. You know, we see a pretty flower on the street, you know, a nice bird lands on a bird feeder outside our window. Whatever delightful things, she was advocating for noticing them. And

I thought, well, maybe my phone could become more delightful. Maybe if what I'm feeling when I open my phone is like a sense of dread and fear, maybe I could change that experience in some way by making my phone a more delightful place to spend time.

So I've been sort of gradually kind of rotating out some of the apps and the widgets on my phone. I took a bunch of sort of unpleasant apps that would tend to give me sort of unpleasant things the first time I opened them. I sort of put those on a second screen and now on my home screen, it's stuff that like makes me joyful. So I made a folder in my photos app, a new album called Delights, and I just put

photos of things that bring me delight. You know, maybe it's my kid playing. Maybe it's a family photo. Maybe it's something that I saw on my way to the office. Like, maybe it's a screenshot from something. Maybe it's a meme that made me laugh. I just, I'm filling up this album with things that bring me delight, and I've put a little widget on my home screen that will sort of shuffle photos just from that delights album all day. So now, when I open up my phone...

I get a picture of my kid as my wallpaper, and then I open my phone, and I see this little widget that has a photo of something that brings me delight. Am I allowed to see the delight? You can see the delight. Okay.

This one is a photo of my kid at the beach over break, making sort of like a very joyful face. Reaching toward the sky. That is a confirmed delight. Confirmed delight. I'm going to keep filling up this folder with things that bring me delight. And I just think this is like something that I am doing to try to change the emotional register

with which I use my iPhone. So I have a good sense of what's on your first screen. I would love to know which are the sad apps that are now on the second screen. Well, it's everything that's sort of work-related, you know. It's not a lot of times that I'm getting a message from a news app that's like, a great thing happened.

Yeah.

So those go on the second screen. I actually have a red flags folder that includes things like TikTok, Threads, Bluesky. These are not... Wait, these are in a folder that is just marked with a red flag? Yeah, I'll show you. This is my red flag folder.

But I did move stuff to the first page, like the journaling app. Apple has a new journaling app. I've only just started using it, but it is helping me out. I put ChatGPT on my first screen. And I'm also putting things like e-reader apps to read e-books on my screen.

Well, I think this is a great system, and there's actually only one thing that I think that would improve it, but we can actually do it right now. What's that? I should take a picture of us for your delights folder. Aw, let's do it. Let's do it right now. I'm just going to take out a little phone and spin the camera. Smile! All right, that's going in the delights folder. And now every time you open your phone, because hopefully you'll just set this to be the first one, you can remember when we recorded this episode. So there you go. I love that. Yeah.

Now, Kevin, I imagine other people might be setting their tech-related goals for the year. Do we have any tips or words of advice for them? Yeah, I...

I think just be honest with yourself about what is realistic for you. I mean, one thing that you've taught me about goals is that they should be something that you could actually realistically achieve. And so if the goal is, you know, never use my cell phone or, you know, never look at social media, that might not be a realistic goal for you. So I think it should be something that is sort of a stretch, but not impossible.

And I also think, like, as much as you can, try not to be too hard on yourself. Build in some buffer so that if you don't get all the way to your goal, you still feel good about having made it part of the way there. Yeah, I really like that. I think, you know, the one that I would just throw in there is trust your instincts. If there is a piece of software out there that is making you feel bad, just experiment with getting rid of it. You can always download it again later, right? But

Over and over again, when I talk to folks, they sometimes feel embarrassed because, you know, there's maybe some social app that all their friends are using, but they're not on. Trust your instinct. There is something that you know that you don't want to be a part of that, and you're probably right. And so as you're sort of casting about the tech landscape, wondering, you know, what changes you might want to make, I would just listen to those instincts. What do you just not want around you anymore? I promise you, you'll be able to fill it up with something you like better.

I love that. Yeah. All right. So we will check in on our goals this time next year, and hopefully I will just be full of delight. I mean, I am excited for that, and I will have found something to do besides just staring off into space while listening to Marvel Snap gameplay. You will no longer be the embodied version of the YouTube algorithm. Yeah, exactly. Well, that'd be nice.


Hard Fork is produced by... Can you do that without jostling the thing again? Jostling it is part of my creative process. Okay. Hard Fork is produced by Davis Land and Rachel Cohn. We had help this week from Kate LoPresti. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Daniel Ramirez. Original music by Marion Lozano, Pat McCusker, Rowan Niemisto, and Dan Powell.

Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergersen. If you haven't already, check us out on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Jeffrey Miranda. You can email us at hardfork at nytimes.com. Let's hear those resolutions. And don't send us a text if you're an Android user. We really don't want to hear it. Kevin. All right.
