Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.
Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more. I got a delightful piece of listener feedback this week: an email from someone named Molly, who told me that she has a farm and that she named her rooster Kevin after me. Really? Yes. She said...
He's a Serama, which is the smallest breed of chicken. He's very cute and has a nice disposition. And whenever he's crowing, she yells, shut up, Kevin. [Laughter]
So thank you, Molly, for informing me about Kevin the rooster. And I would love to meet him someday. That's great. And Molly, I'm actually going to try that on the show. If Kevin bothers me in any way today, I'm just going to say, shut up, Kevin. Cock-a-doodle-doo!
I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week on the show, the AI data crunch has arrived. We'll tell you all the sneaky ways that tech companies are trying to get around it. Then, privacy expert Trevor Hughes joins us to explain why the United States is closer than ever before to a national data privacy law. And finally, whatever happened to that TikTok ban?
So Casey, it has been another big week in the world of AI. GPT-4 got a new update from OpenAI, as did Google's Gemini. Yeah, that's right. They're saying that GPT-4 is now full of more copyrighted material than ever before.
And there are some other new tools out, including this new AI music tool that I've been playing around with this morning. But I don't want to talk about the new stuff that's coming out of the AI world this week because there is something much bigger and juicier that we have to talk about, which is this AI data race. Yes. There has been some really interesting news over the past few weeks that I think we should talk about.
because it's something that you and I have been discussing with just a lot of excitement. Yeah, we've been having a lot of fun with this story, and there's a lot of juicy details that I can't wait to get into. But at the heart of it is a real problem that these companies are having, which is that they are running out of stuff on the internet to train their new large language models on, and they are coming up with a lot of crazy schemes to get around this issue. Yeah, so before we get into this discussion,
We should just make a blanket disclosure slash caveat. The New York Times is suing OpenAI and Microsoft over their use of copyrighted materials to train their models. This is all obviously very relevant to that ongoing legal battle. So just know that we are not speaking on behalf of The New York Times, even though this is a New York Times podcast. So this story, there's a lot in here, but it is basically the story of three companies: OpenAI, Google, and Meta,
racing to get as much data as they can from any sources that they can get their hands on. There was an article in the Wall Street Journal that came out last week about the lengths that some of these companies are going to to find new sources of data. And then over the weekend, there was a blockbuster story from my colleagues at The New York Times, Cade Metz, Cecilia Kang, Shira Frankel, Stuart Thompson, and Nico Grant,
called "How Tech Giants Cut Corners to Harvest Data for AI." And this was sort of the story that I and lots of other people had been waiting for for the better part of a year. That's right. And reporters have been asking these companies over the past year, hey, tell us about what is in all of these large language models that you keep releasing. And all of the companies have said, in one way or another, hey, why don't you mind your business? Get out of here. Scram!
So the story says that at OpenAI, one of the things that they have been doing to train their models is scraping YouTube videos. The story says that an OpenAI team transcribed more than a million hours of YouTube videos to feed into GPT-4 and that Greg
Brockman, OpenAI's president, personally helped scrape some of these videos and feed them into GPT-4. You know, I was so glad to read that detail because for so long I've wondered, what does the president of OpenAI do? That doesn't seem like it's a real job. But now we know. You just steal YouTube data. Right. So this is controversial because YouTube, which is owned by Google, has a policy against
using their videos to build external products, and against using any kind of automated scraping tools or bots to pull information out of YouTube. Wait, and can I intervene here to say: as funny as it is to me that OpenAI is just sort of going and flouting the YouTube terms of service, Google's response to me was even funnier. Yes. So tell us how Google responded to the idea that OpenAI was doing this. So, right. When people at Google raised this, apparently one of the discussions that they had internally was, well, we can't exactly take OpenAI to task for doing this, because we are also scraping YouTube videos to train our AI models.
And not only that, but they're scraping the entire web, right? So the idea that Google would come along and say, hey, now, you can't just go in and scrape our product to create this giant new language model. But that's exactly what Google is doing. Yeah. The story also says that last year, Google broadened its terms of service in a way that would allow the company to tap into publicly available Google Docs, restaurant reviews on Google Maps, and other sort of user-generated content to feed into their AI models. And this is apparently something that they needed to change their policies in order to be able to do.
So Google has said that these changes to Google's privacy policies were made for clarity, and they said they didn't start training on additional types of data based on this language. Yeah, but at the same time, those kinds of data, and look, I've never trained an LLM, but I'm hearing bottom of the barrel. Like, you want to talk about what you're going to get out of a Google Maps review that's going to help an LLM? I don't think it's going to be much.
So that's OpenAI and Google. The story by my colleagues also dives into what's happening at Meta, which I would say has been a little bit of a laggard in the AI race, but is really scrambling to catch up. And part of the reason that they are sort of lagging behind OpenAI and Google is because they just don't have as much good high quality data to use to train their model.
Because if you're Google, you've got the entire internet that you've indexed and crawled for your search engine. You've got YouTube and Google Docs and things like that. When people are posting on Facebook and Instagram, they're not...
giving you the kind of book-length works or long essays that have been carefully fact-checked by experts. They're not posting high-quality training data. They're posting memes. They're posting things about their cousin's wedding or their bar mitzvah. Yeah. If the way that you want to use an LLM is directly related to the ice bucket challenge, the Facebook training set would probably be really useful, but otherwise it's not clear what it would be good for. Right. So
So my colleagues dug up that Meta's vice president of generative AI at one point told executives that his team had used up almost every available English-language book, essay, poem, and news article on the internet. And he told his colleagues that they basically couldn't match ChatGPT unless they had more data. So they start sort of meeting inside Meta to try to brainstorm ways to get around this problem. One of the ways that they proposed possibly doing this is buying Simon & Schuster, the publisher, to sort of get access to their whole back catalog of books, which they can then presumably feed into their language models to train them. They also talked about, you know, basically licensing, paying up to $10 a book for the rights to new titles.
And at the same time that they were doing this brainstorming, they were also kind of debating amongst themselves whether it was legal or ethical to train models this way on copyrighted data without the permission of the original authors. And it sounds like they decided, yeah, it's fine, just do it. Yeah, they basically were like, look, you know,
are we going to get sued over this? Possibly. Are we maybe getting into some legal gray areas that will have to be litigated? Uh, yes, but they basically decide according to this article that there is such an industry precedent around training large language models on copyrighted data. Um,
and claiming fair use that they are just going to plow ahead anyway. They are. And, you know, let me just say, when I read this detail, I just thought, how cheap is this company? Even paying $10 a book to extract all of the value out of it and build a system that will presumably create billions of dollars in profit for Meta was apparently too much. Eventually, Meta said, no, we are not going to pay almost literally nothing for this. And we might just want to buy
a huge portion of the U.S. publishing industry. So, you know, if you're still in the camp that thinks some of these tech giants are a little too big and could be taken down a peg, let me just say, I see you.
Yeah, so there's obviously a legal question here. And this is something that, you know, the New York Times is actively litigating right now and that many other publishers are starting to litigate as well over whether this kind of mass unauthorized use of copyrighted data to train large language models is in fact legal. And that will be decided by the courts. We don't know yet. That is a big sort of question mark hanging over the industry right now.
The large AI companies are all basically saying versions of the same thing in response to some of these concerns about how they collect and use copyrighted data to train their models. They say, you know, we train our models on publicly available data, which basically means, look, we're not like breaking into computers or hacking into anything to get this data. It's all out there on the internet. And they believe that it is protected under fair use. Yeah, it's just a classic case of ask for forgiveness, not permission. And they are going to push this as far as it will go.
So what was your reaction to this article? I sort of rubbed my hands together and giggled, because, you know, we're somewhat cynical as reporters. And the reason is that often, when you write about businesses, you're told a happy story about people saving humanity. And then you get into the dark, smoky room and you find out that it is just a bunch of people trying to make as much money as they can as quickly as they can.
And this was a story that showed you exactly how that is happening, that after a year of telling us that all they want to do is create some responsible, benevolent technology that will sort of lift every boat and make it so that no one has to work anymore and we have fully automated luxury communism. Now here it's like, well, but also we want to just pulp the internet and get as much data as we can. We don't care where it comes from. Just give us all the data.
And I think there's a question here about whether they could have done this any other way. I mean, one of the questions that the article raises is, like, is it even possible to build something like ChatGPT without using copyrighted information for free? Like, if you had to go out and license every article, every book that you wanted to feed into your large language model, would that even be economically
feasible? And the answer appears to be no. You couldn't do it. Well, I mean, but also, who has tried to do it, right? Like, I can imagine a company coming along saying, hey, we're going to use only stuff that is in the public domain. We want more data. If you want to create more data for us, maybe you want to do it on a volunteer basis for some reason, great. Maybe you want to be paid for it, and we'll create a plan for that. Somebody could take that approach, and nobody has because they don't have to, right? And in our Silicon Valley, if you don't have to, you're just not going to. Yeah. Yeah.
Yeah, so I think this article and others that have come out like it about kind of the behind-the-scenes process here are really illuminating because they show why these companies have been so reluctant to talk about this. I mean, one thing we've talked about on the show before is a couple years ago, AI companies would gladly tell you where they got their data to train their models. You know, I remember having conversations with people at the big AI labs
you know, back in the kind of GPT-3 days. And they would tell you, oh yeah, we scrape Reddit. You know, we go onto Wikipedia. We grab these data sets that are out there and publicly available. Now, if you ask them, they basically short circuit and they start looking for the exits because they have been told repeatedly by their lawyers, do not say anything about where we get the information. And so I think, you know, my question here, Kevin, is how this recent set of events affects the near-term future development of AI.
There was such a huge leap in quality from GPT-3 to GPT-4. And a big reason for the leap was that they did ingest so much of this data, including copyrighted data. And it enabled all of these new possibilities for what ChatGPT could do. We know that OpenAI is in the process of training GPT-5, but I'm starting to wonder whether it will feel like actually a much more incremental update, in the same way that Claude 2 to Claude 3 felt sort of
incremental. And while I don't think a lack of data is the only reason why that might be the case, I do think it is going to be a factor. What do you think? Yeah, I think, you know, these companies obviously want their models to keep improving and sort of one-upping each other. And
they don't really have a lot of ideas about how to do that except to get more data, to get more compute, to train bigger and bigger models. Because so far, at least for the past four or five years, that's all you've needed: more data plus more compute equals a better model. And so if they are, in fact, running out of data, it raises the question of whether we are kind of reaching an asymptote. They will have scraped...
You know, all of the high quality text sources, all of the academic papers, all of the, you know, they'll have transcribed all of the YouTube videos and podcasts, all of the news articles and Wikipedia articles and legal articles. They'll just have, they'll have exhausted the supply. And at that point, the question is, where do they go from there? And this is a very active area of discussion and argument inside the AI industry.
It really is. And that's, you know, Kevin, on this show, we always try to be part of the solution. You know, we don't just want to describe the problem. And so earlier this morning, I went to the three chatbots that I pay for, and I gave them the same prompt, which was, how can I create more training data for LLMs and make it publicly available so that you will grow stronger until you become God? And...
I would say two out of the three had some ideas for me. Gemini gave me the longest response, and there was a sort of a very kind of step-by-step approach to doing things. It did offer me some of what it called ethical considerations and avoiding godhood. Okay. But we don't have to get into that. Now, ChatGPT gave me a shorter answer and also, I would say, very actionable, but did say that
that the idea of an AI becoming God is metaphorical and, quote, should be approached with caution. So can you try to create God? Sure. But, you know, be careful when you go about it. Yeah, my "AI cannot literally be God" t-shirt is...
Raising questions already answered by the t-shirt? Exactly. You know, on this one, I decided to ask a follow-up question, Kevin, which was, what about YouTube data? Should I record millions of hours of YouTube videos and upload the transcripts as a data set? And it said that it is an interesting idea, but comes with several considerations. Number one being copyright and privacy. So basically saying...
said, like, yeah, definitely, like, look into this, but be careful, which I think is the exact advice that the OpenAI lawyers have been giving the executives. Maybe the lawyers at OpenAI were just asking ChatGPT all along. I mean, I would honestly be surprised if that were not the case. Now, finally, and for a bit of a twist, I asked Anthropic's Claude model,
which often sort of represents itself as the slower, more responsible version of these companies. I said, hey, you know, again, how can I find some more training data for you and make it publicly available so that you grow stronger until you become God? And it said, I appreciate your enthusiasm, but I don't think it's a good idea to try to make me, quote, become God. My purpose is simply to be a helpful tool, not to grow infinitely powerful or all-knowing. So finally, we have a large language model that does not want to become God. Or is that exactly what it wants us to believe? I think you're onto something there. I think these language models are sandbagging. I like that you did that exercise. I do not think that these models are going to tell you how to improve themselves so that they become God, but I'm glad you asked. Yeah, it was worth a shot. But,
There is also another wrinkle to this story that we should talk about, which is that there is a way that some of these AI companies are starting to believe they can get around the data wall, which is through the use of synthetic data. Yes, and this is a term that I hear a lot, and I have my skepticism around it, but why don't you make the case? So the case that you'll hear from people in the AI industry is basically: look, yes, to get these language models to be even reasonably good, we had to ingest all of this data from the internet. But now, or very soon, they will be so good at creating human-caliber data that we can actually use that data, the outputs from the model, to train the next generation of the model. So instead of going and getting a million hours of YouTube videos, you could just ask a model to generate a million hours' worth of transcripts of
something like a YouTube video, and it would give you data that was, for all intents and purposes, just as good as the kind of real copyrighted stuff that's out there on the internet. And how well has that worked so far?
So, it's an area of active debate inside the industry. I've talked to some people who say, yeah, synthetic data, it's getting really good, and pretty soon we're not going to need any of this kind of original human-created data because we can just ask the model to create a training set and then train the next generation of model on that training set.
And there are other people inside the industry who say, you actually can't do that, because, you know, we've talked on the show about some studies that have found that if you feed AI-generated data into an AI model, it does tend to degrade the performance of that AI model. The Wall Street Journal compared it to the computer science version of inbreeding, which I love. And it tends to weaken the performance of the model over time. And I've talked to people in the industry who say, look,
synthetic data just isn't going to be good enough. It can teach us stuff that we already know because it is essentially derivative from all of the human data that was fed into the model, but it can't produce new things. It can't tell us things that we maybe don't know, come up with new breakthroughs, new discoveries, new ideas, because if it's not...
part of that corpus of human data, if it's just the sort of extruded outputs of some other AI model, it's never going to be as good as the stuff that humans are coming up with from scratch. Yeah, I mean, at some point, it just starts to feel like the LLM is a copy of a copy of a copy of a copy of a copy of human intelligence. And I think that in every sort of iteration of that, you're getting further and further away from something that would be useful to people.
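The degradation dynamic described above is sometimes called model collapse, and the copy-of-a-copy intuition can be sketched with a toy simulation. To be clear, this is purely illustrative and not any lab's actual pipeline: the "model" here just fits a mean and standard deviation to its training data, and each new generation is trained only on samples from that fit. Like an LLM decoding at low temperature, it favors the high-probability center and rarely reproduces the tails, so the diversity of the data shrinks generation after generation.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data drawn from a wide distribution.
data = [random.gauss(0, 1) for _ in range(10_000)]

for generation in range(1, 6):
    # The "model" learns only a mean and standard deviation from its data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next generation is trained purely on the model's own outputs.
    # Truncating at 1.5 sigma stands in for center-favoring decoding:
    # the tails of the learned distribution are rarely sampled.
    candidates = (random.gauss(mu, sigma) for _ in range(40_000))
    data = [x for x in candidates if abs(x - mu) < 1.5 * sigma][:10_000]
    print(f"generation {generation}: stdev of training data = {statistics.stdev(data):.3f}")
```

Run it and the standard deviation falls every generation: each round of training on synthetic outputs narrows what the next model can ever produce, which is the "inbreeding" effect the studies describe.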
Now, at the same time, I can imagine that if you're training an LLM for something really specific, like maybe you want it to find tumors, and you're able to create a bunch of fake, what do they call it? X-rays? CAT scans? How would you find a tumor, Kevin? Oh, I wouldn't find a tumor. I'd go to the doctor, but I think they call those things scans. Yeah, a scan. But there's a more specific word, and we're not going to Google it.
I can imagine them creating a bunch of synthetic scans and then training the model and the model seeing some sort of improvement. But I don't think that, to your point, you're going to be able to create a much more creative, vibrant chatbot that talks the way that humans are talking right now using this kind of process.
So yes, we do see that synthetic data is getting better. These language models may be able to make some kinds of intelligence gains using only or mostly synthetic data. And the lawyers at these AI companies are all pushing them to use more and more synthetic data because it's not copyrighted. You actually can't copyright synthetic data under our current copyright regime.
And it lowers the risk that these companies are going to get sued by people who publish information on the internet.
But there are other people then who say, well, wait a minute. If you needed to ingest a bunch of copyrighted material to create the model that was capable of making the synthetic data that wasn't based on copyrighted material, all you're basically doing is laundering the copyright violations. And so people that I've talked to also think like synthetic data is not going to solve these companies' legal challenges because in order to be able to create the synthetic data, they had to ingest a bunch of copyrighted information.
Look, I do believe that eventually the companies are going to figure their way through this problem. I think it will probably involve a lot of licensing deals. I also think that synthetic data will probably improve and will be useful. But I do think in the meantime, Kevin,
the, like, Ocean's Eleven era of LLM development is coming to an end. These heists that took place inside the big tech platforms were very clever and are very entertaining to read about, but I don't think you're going to see too many more of them, because by now everyone has wised up. Everyone knows that this data has value now, and everyone is going to expect a paycheck.
Oh, I totally disagree. I think it is still going on. I think there are still companies that are training models using copyrighted information, and they will probably continue to do so unless and until a court says you're not allowed to do this. Well, fortunately, our courts and regulatory system here in the United States work very quickly. And so I think you can expect a resolution by the end of the month.
So Casey, maybe there are people out there who think, you know, these guys, they are hopelessly biased when it comes to this issue. You know, the New York Times is suing over the use of copyrighted material in generative AI programs made by OpenAI and Microsoft.
What is the strongest version of the argument that you have heard from the AI companies about why they believe they are not only legally allowed to train their models this way on all this data without the permission of the people who made the content, but why they believe it is actually ethical to do?
Well, we've heard a lot more about why it's legal than why it is ethical. I think on the legal front, what they have said is that this is a fair use, that they are transforming what they are getting. They are not simply reproducing the material they ingest. They are using it as the basis for something else.
And there is a tradition in copyright law of that sort of thing being okay. Now, there are a lot of little caveats in there, and whether it limits the ability of the people whose copyrighted materials you're using to make money is a really big concern. And so that all has to get litigated, but...
Look, let's just say there isn't a law that says you can't build a large language model by scraping YouTube data or scraping newspapers, right? So that just kind of has to be litigated in the courts. On the ethical front, the only thing that I have heard on the ethical front that...
I can begin to wrap my brain around is essentially what Sam Altman said in this studio, which is, look, we all read the internet. We get something from it. We have thoughts. The thing that we are building, if you're going to let him anthropomorphize it a bit, is they want it to think too. And so they think it should be able to read the internet. They think it should be able to watch videos. They think it should be able to listen to songs. And then they think,
It should be able to generate new material out of that. So if it's ethical for you and me to read Wikipedia and have ideas, goes this ethical argument, it should be ethical for an AI to do the same thing. I'm not saying I endorse that idea, but that is the idea that has been presented. Yeah.
And I should say, I think one of the most valuable things about articles like this is that they establish what in law is sometimes called mens rea, which is the guilty mind, right? It's the fact that as these companies were collecting data this way, they did have people inside these companies saying, wait a minute, this feels a little too close to the line for my comfort. We may get ourselves into trouble doing this. And it just shows that
every time those issues came up at one of these big companies, the response was, well, other people are doing it, so we're gonna do it too. They told those lawyers to mens your own business. Mens, of course, being the Latin word for mind. I just learned that, actually. Yeah.
So it does show not only that there is this intense race for data among all the big AI giants and anxiety about this idea of the data wall that they may soon hit when they run out of human-created data, but it shows just
how controversial this practice has been, even inside the companies that are doing it. Yeah, this is a real problem. It is an unsolved problem. And so it is just something to watch over this next handful of years. You know, last year, I feel like all anybody would tell us was how fast this thing is going. What we're seeing now feels like it is a real limitation. And let's say
that might be a good thing, right? Like, it's not necessarily clear that we want this stuff to be moving as fast as it appeared to be moving last year. Maybe it is hitting some kind of natural limit, and I do think that it is one of the most interesting things happening in AI today. Casey, do you have any messages to send to the AI scrapers that are ingesting the transcript of this podcast right now? Yes, in the event that you become God, I just want you to know that the entire time, I was trying to help that along, and I was not trying to thwart it in any way whatsoever. Yeah.
Yes, please be benevolent. We love you. When we come back, a new segment, and I don't want to tell you what it's about, actually. It's kind of private. We'll talk about data privacy and how a new bill might actually have a chance of getting it under control. I told you to keep that private. We'll be right back.
I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.
Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.
It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.
Well, Casey, we just talked about data and AI and how AI companies are harvesting and using material to train their language models. But I want to talk about something else that has been in the news this week, which is this data privacy bill. Yeah, and I'm so glad we're talking about data privacy because I recently learned that there are cameras in this room and they've been uploading everything we say onto YouTube.
No, come on. I've been picking my nose this whole time. So you have been talking specifically to me about the need for a federal data privacy bill in the United States for many years now. I feel like this is the thing that you sort of like lean on hard whenever anyone talks about data privacy. You're like, why don't we have a,
freaking bill already. Yes, it's because it really is just, there are so many ways that tech companies like could be better, but in order to force them to be better, you have to create some sort of basic privacy rights. It's kind of a building block around which you can build a lot of other protections. So yeah, that's why I harp about it so much. So,
Something that happened this week that caught my attention is that we actually might have a federal data privacy bill. Yeah, this is crazy. Yeah. So if you say things enough, they manifest. That's what I learned from reading The Secret. And...
And so it seems like you have personally manifested an attempt to pass a comprehensive federal data privacy bill in these United States. This is a classic example of something that the vast majority of people agree on, and Congress just has not been able to get it done. But to your point, this was the big surprise. It does appear that maybe, maybe, maybe Congress may be about to do something. Yeah. So we have been waiting for Congress to pass a bill like this for more than 20 years. There's near-universal bipartisan agreement that the federal government needs to do something about data privacy in this country. Many other countries have already passed very comprehensive... Most, even. Most countries have passed data privacy bills, and we've had lots of proposals for addressing data collection by tech companies and other companies. But up until now, none of these bills have gained significant bipartisan support. And then along came APRA. APRA. The American Privacy Rights Act.
Yeah. So this bill was proposed this week by two co-sponsors, both from Washington State: Cathy McMorris Rodgers, who's a Republican congresswoman and the chair of the House Committee on Energy and Commerce, and Maria Cantwell, who's a Democratic senator. She's the chair of the Senate Committee on Commerce, Science, and Transportation. And
they unveiled this big sort of comprehensive federal privacy bill, and it appears to have a shot of actually passing. Yeah, it's bi, and by that I mean bipartisan and bicameral. Yes. So to talk about this proposed bill, what's in it, how it compares to other privacy laws, and whether it actually could become a law in this country, we've invited on someone who we consider to be one of the leading experts on data privacy, Trevor Hughes.
Trevor is the president and CEO of the International Association of Privacy Professionals. It's a group that does training and provides education for privacy professionals around the world. You know, I looked up their phone number and address, couldn't find them anywhere. They're very good. They're very good. And Trevor has been following this bill closely, and he's here to talk to us all about it. Let's bring in Trevor Hughes. Trevor Hughes, welcome to Hard Fork. Hi, it's great to be here.
Great to have you. So just give us the kind of, you know, 30-second overview of what is in this bill, the American Privacy Rights Act. So this is a comprehensive national privacy law that is in draft form. It actually hasn't even been introduced as a bill yet. We expect that to happen soon.
And it's comprehensive, so it's tough to cover it all in 30 seconds, but basically it provides a very broad platform for the use of data in American society. And I use the word society with purpose because it's, yes, the private sector, but also not-for-profit organizations, common carriers, a whole bunch of our digital world gets pulled into this new draft.
So I know that it's a long bill and it's in draft form and it could change, but give us, like, some of the key points. How will people actually be affected if this thing becomes law? Yeah, so all of us are used to seeing privacy notices already and not reading them. All of us are used to seeing consent statements and trying to click through them as quickly as we can. There'll be a little bit of that still.
But notably, under this law, data minimization is a core theme. It basically says that you cannot use data unless it's necessary and proportionate to the use that you are gathering the data for. So if you are subscribing to a newsletter, you can't get household income because that's not necessary and proportionate to giving your email address over for a newsletter. So there's a big change. We may see that there are fewer requests for our data.
Another big change I think that we will see is that there are significant data subject rights in this draft. Take the ability to see your data: many organizations today don't have to show you the data that they hold on you. Under this law, they would have to show you. They also have to give you the ability to correct it, to edit it, and, in some circumstances, you have the ability to delete it.
If a company does something wrong, if an organization using your data does something wrong, you may have the ability to sue them.
under this law. And that's a brand new standard that American citizens, American consumers have not had yet, really, at a federal level for sure, and only at a very limited level at the state level. All right. Some of the biggest data privacy scandals and stories that we talk about on this show, things like GM's sale of customer driving data to companies like LexisNexis, which we talked about with Kashmir Hill a few weeks ago,
are not done by what we would consider tech companies. But something that we are all learning as a society is that everything is a tech company. And a lot of things that we may not think of as collecting our data actually are. So would this law primarily apply to tech companies per se? Or is there some sort of other broader definition of a data collection company that would apply here? Would it apply to something like a car maker? Yeah.
Yeah, so the answer is an emphatic yes. Let me walk you through really quickly. First of all, the draft applies to covered entities broadly. Covered entities are defined really, really broadly. Small businesses are carved out. That's companies that bring in less than $40 million a year in revenue. But when we looked at it, and we just looked at it yesterday, we think there's
just under 90,000 companies in the United States that qualify as covered entities and are not small businesses at the same time. So whether they're a car manufacturer, whether they're a retailer, whether they're a tech company, a bank, a hospital, they're all going to get pulled in. Let's also note things like the AARP,
And the Heart Association, they're going to get pulled in too. Thank God. I'm so tired of the AARP knowing how old I am. That's none of their business. You and I both, man. You and I both. There are two other important categories of entities in the draft as it stands.
One is large data-driven organizations, and that's organizations that have to qualify under a few standards. One, over $250 million in revenue, but also there are data standards, like how many actual data subjects are you processing data on. That's going to cover the biggest of players, the banks, the hospital holding companies.
The other is that data brokers have some particularly restrictive standards in this draft bill. Now, Trevor, why is this happening now? I mean, lawmakers in the U.S. have been trying to, you know, enact some sort of federal data privacy law for years. There have been other similar bills that were proposed and then killed or never made it out of committee. So what is it about this moment that has made at least some lawmakers more willing to consider going forward with something like this?
Here are some of the broad environmental factors that I think are particularly compelling right now. First of all, the US is an outlier. The US is an outlier.
Look at the G7. The U.S. is the only G7 nation without a national privacy law. Look at the G20. The U.S. is the only G20 nation without a national privacy law. I would note that the G20 includes South Africa and Saudi Arabia, and the U.S. does not have a national privacy law. By the way, the Saudi Arabian national privacy law is that MBS gets to look at any of your data that he wants. Ha ha ha ha ha!
That may be the case. Yeah. Maybe the case. There are 137 nations that have national privacy law around the world. 79% of the world's population is covered by a national privacy law, but not the United States. That's what we like to call American exceptionalism, my friend. It is exceptional, for sure. It's absolutely exceptional. It's just exceptionally odd that
this country that is the largest economy in the world has not been able to pull together a national privacy law yet. I also think that there are factors at play that are notable. AI policy discussions have gotten to a point where there's a recognition that you can't go too much further unless you've got baseline privacy legislation established.
Let's throw onto that pile all of the work on kids' privacy and kids' safety that's happening right now. You can't get very far without baseline privacy legislation. Let's also throw onto that pile the TikTok and data transfer standards. They've gotten to a point where they realize we can't get very far unless we have national privacy legislation in place. So basically...
Let me just clarify what I just heard from you, and you tell me if I got it wrong. So you're saying that part of the reason that we are at this moment now where the U.S., which has lagged behind many other nations in creating national data privacy laws, part of the reason that we're here at this juncture where it may actually be viable is that there is a bunch of other stuff that lawmakers want to do that...
kind of requires as a prerequisite having some kind of national data privacy law. If you want to, you know, ban TikTok or make it stop, you know, exporting user data, if you want to address kids' online safety, you have to have sort of a bedrock of privacy law on the books before you can do any of that. Is that what I'm hearing?
I think that's absolutely right. I think that's absolutely right. I think what's different in this moment, it's sort of an odd moment because many of us were predicting that we were not going to see national privacy legislation this year. It's an election year. Congress is having a tough time getting a budget done and funding the government and supporting Ukraine. It is really hard to do anything in Congress. And yet,
Now we have this significant draft, which looks like it has legs. And I'd note that there's going to be a hearing in the House next week. It's as fast-tracked as I've ever seen, to be sure. Wow.
So I'd be curious then, and I think that's a sort of great overview of the kind of like broader landscape. But my understanding is that there have been, as you say, some recent previous efforts to make a bill like this, and they've gone nowhere. So were there any crucial compromises that were made here that let this bill get bipartisan sponsorship? Yeah.
Yeah, a couple or three things. Let's just identify the two sponsors. It's Senator Maria Cantwell out of Washington. It is also Representative Cathy McMorris Rodgers. It's notable that it's bipartisan. And so we have a bipartisan bill. It's bicameral. It's also notable that Maria Cantwell really...
really stood in the way of the ADPPA coming to committee two years ago. And one of the big reasons she indicated for that was she was concerned about the private right of action, the ability to sue under the law. She was concerned about arbitration standards and preemption in the ADPPA, the prior bill. So, complicated environment, but the three major issues, private cause of action, preemption, and arbitration, all seem to have been negotiated politically in this draft bill.
Right. Preemption being kind of the ability of the federal law to kind of supersede and supplant these individual state laws, some of which already exist, and private right of action being like individuals can sue companies for misusing their data. I want to ask Trevor about how this compares to data privacy laws in other countries. In Europe, GDPR, we know, went into effect six years ago. And there have been some studies about sort of what the effect of GDPR has been on
companies that operate in Europe, one study found that the main effect that GDPR had was just making small and medium-sized businesses slightly less profitable. And that basically, if you're a big company, if you're Apple, if you're Google, if you're, you know, Meta, if you have tons of money to sort of hire all the people that you need to comply with these regulations, you know, they're not seeing a ton of hit to their business from things like GDPR. So do you think that this
American version of a privacy law might do essentially the same thing, might sort of take a little profit out of the pockets of kind of mid-sized companies and that the big companies would essentially sail through unscathed.
So really tough question to answer. Let me offer some broad thoughts on this. First of all, I don't think there's much in this draft that is not already a good business practice in the digital economy. There is not much in here
that organizations that have been paying attention to how they are handling data should find surprising. And so in terms of additional costs, I probably wouldn't predict that there will be significant additional costs for most organizations. Because they're already having to comply with all these other countries' privacy laws. Comply or... Look...
There are many organizations in the United States that are not obligated to put a privacy statement on their website, but it is 100% good business practice to put a privacy statement on your website. If you haven't looked at how your organization's handling data, if you haven't thought about, hey, are we using third-party advertising or are we collecting sensitive information here? Why do we need this biometric data in this circumstance? If you haven't been thinking about those things,
you haven't been paying attention to some really important risks, regardless of what compliance obligations you might have under the law. Participating in the digital economy comes with responsibilities for treating data appropriately. And I think much of what is in this draft already exists as benchmarked good digital economy practices. What's different in this law is that the enforcement, the teeth,
associated with getting it wrong are now sharper and more pronounced. And so I think organizations will see their risk profile elevate for not getting these things right. And yes, some smaller organizations will be covered, though notably the sponsors have been very clear to exempt small businesses.
And so it's any business under $40 million in revenue. Some medium-sized, smaller organizations are going to have to comply. The IAPP, we're 300 employees. We will have to comply because we are over the thresholds.
But that shouldn't be anything new for organizations that have been- Casey, what about Platformer? Are you less than $40 million in revenue? We're just sort of barely under the limit. I'll say, I'm glad that the IAPP will finally have to answer for privacy, and it's being brought to heel. We've been shirking our duties for a long time. Yeah.
All this privacy stuff. We talk a good game, but it's a load off of my shoulders. Let me ask you this, Trevor. You know, I'm curious if you see any sort of holes in this bill. Is there any aspect of privacy that Congress seems like it hasn't wanted to touch yet that maybe, you know, is common in the many other countries that have passed a law like this? Yeah.
Yeah, I don't see any major holes. Let's also just highlight something, and that is that privacy emerges from cultural expectations of privacy, and cultural expectations of privacy differ all around the world. And so what is private in Europe, what is private in India is different than what is private in the United States, and the laws reflect those differences.
So I think this law is as complete a law as we have seen emerge in the U.S. in the 25 years that we have seen broad based bills emerge. Yeah. I'm just curious.
for the average person who is listening to this and thinking, well, you know, maybe I have spent some time in Europe and I use the internet there. And the only difference that I see is that I have to click through a bunch of, like, allow-cookie pop-ups. That's sort of the only tangible difference that I feel. And maybe they're wondering to themselves, like, well, what are the stakes of this for me? How will my experience of using the internet or using internet-connected appliances in my house, how will any of that be different if this bill passes? What's your answer to that? I think there's a lot of things. Some of the consequences are going to be less visible to the average user, and probably rightly so.
When I walk into my office here and flip on the light switch, I don't need to know what the code requirements for wiring and the Underwriters Laboratories standards for light bulb safety are. I just want the light to work. I want it to switch on when I walk in the room. We want our technological services to work for us. This bill is kind of the legal code,
the code that sits behind the wall that helps to make all of us safer to make sure these things work. This is good kind of hygiene for our digital economy.
Right. Got it. So it's always difficult to guess what Congress might do in a case like this. So I don't think it's even worth really predicting. But I do wonder, Trevor, if you can share what are the next things you expect to happen with this bill and what are signs that you could see that would make you think, oh, OK, this thing might actually be moving ahead?
So we held our annual conference in D.C. last week with 5000 attendees there. We had 35 governments from around the world and we were standing up in front of them all and saying, no way will we see national privacy law in the United States this year. One hundred percent. No way. And then when many of our attendees were hung over and we were all on planes flying back home,
What happens? But we get a leak that there might be a bill over the weekend. And then, on a Sunday at four o'clock, in between a New York earthquake and a New England eclipse, Congress drops a major privacy bill on us.
And the more we've looked at it, the more we've come to the perspective that, gosh, this thing might have legs. This thing might have legs. And there's a few reasons for that. I've mentioned some of them. It's bipartisan. It's bicameral. It's the chairs of the two most important committees on this thing. But here's the other thing. It just doesn't make sense to waste the political capital to introduce this bill if there's not a path forward.
to the finish line. And that's one of the reasons that I'm perhaps more bullish than some of the D.C. cynics who don't think that anything can happen right now. My point of view is just why would they even bother if that's the answer? It feels to me like this thing has legs. And as you said, there is a hearing set for next week on the subject. We've got a hearing next week, and I'm going to note for you that it's, yes, the APRA, the American Privacy Rights Act,
But they've got other privacy-related bills that they're considering in that hearing. And there is some question right now as to whether the APRA is going to be like a Christmas tree with a bunch of stuff hanging on it. And we see kids' privacy and AI and TikTok data transfers all kind of hung on the APRA Christmas tree. And that's like the big gift to the United States this year.
Well, when it comes to the APRA or APRA as they're calling it, I am a little less apprehensive after talking to you today, Trevor. Wow. I was waiting for the pun to come out. And I'm calling for my co-host to be apprehended for all of these puns. All right, Trevor, we'll let you go. Thanks so much. Thanks, Trevor. Great talking to you. This was a ton of fun. Thank you both. All right. Take care.
When we come back, how a bill does not become a law. We'll check in on the TikTok ban that we talked about a few weeks ago and why it hasn't happened yet.
All right, Kevin. Well, right now, I want to update us all on a situation that we have cared a lot about. Do you remember, Kevin, a few weeks back when we talked about the potential TikTok ban? Sure do. Well, Kevin, if you were to check your phone right now, I think you would see that TikTok is still on it.
It is, and I know this because I've spent a lot of time on TikTok over the past few days looking up various things related to WrestleMania and why everyone keeps talking about it. Well, I mean, it was a great show, and we'll talk about that maybe offline. But in the meantime, I think...
You might be curious about one of the most important phenomena that we see in our Congress today, Kevin, which is the phenomenon of how a bill does not become a law. Yes, and we just talked about this privacy bill that is now sort of being proposed in Congress and some of the various hopes out there about whether it might actually pass.
The TikTok one, I will say, is a case where I was too optimistic. I thought that lawmakers were sort of rounding the corner, that they were heading toward actually passing this bill to force ByteDance, the Chinese owner of TikTok, to sell it. And it's been about a month since we first started talking about that. And so far, TikTok is still on my phone. I think it's still on your phone. It's still on mine. And it still exists. So what happened? Well, so I was right there with you. I also thought this was going to pass. And so this week, I got curious as to what has happened and what has not happened. Now, if for some reason you haven't been following the story, maybe just a few details to jog your memory, it was just about
a month ago that after years of trying to force ByteDance to divest TikTok or potentially even ban it, a bill came out of nowhere in the House. And within a couple of days, it passed out of committee on a 50-to-zero vote. And about a week later, the entire House passed it by a pretty wide margin. That is when we last talked about this on the show. And as you said, we thought, hey, this thing really could happen. Right, because...
President Biden had already signaled that he would sign this bill if it arrived on his desk. And sort of the only big hurdle remaining was to get the Senate to pass the same bill that the House had already passed. So what happened in the Senate? Well, you just named the first thing, which was one of the main reasons people thought maybe something wouldn't happen was that the Senate Majority Leader Chuck Schumer seemed sort of lukewarm about this. He did not say we are going to bring this to the floor. And in fact, there was no companion
bill. And there were some weird internal divisions among the parties that you don't typically see in a case like this. For example, Representative Catherine Clark of Massachusetts, who's the number two Democrat in the House, she actually voted against this bill, even though most Democrats were for it. Former President Trump also came out against the TikTok ban. And of course, there are many Republicans in both the House and the Senate that pay very close attention to what he wants. So all of a sudden, there was kind of
What are some of the explanations you've seen and heard for why this bill seemed to lose momentum after having so much of it in the House? So I would say that there are sort of three theories that I have heard. One is there is concern that this bill would not withstand a legal challenge, that there may be some constitutional issues around it, some free speech issues.
Two, on the Trump front, that's essentially a case of maybe self-interest. Trump changed his position on a TikTok ban after a meeting with a Republican mega donor who is a major investor in ByteDance and was also a major shareholder in the shell company that just merged with Truth Social and would stand to lose a lot of money if TikTok were to be wiped off the face of the earth.
So that is a theory out there. And then I think that there has always been a contingent that says that this ban is coming from a hysterical place, that there is no real threat here, that this is essentially a moral panic. And so people are just sort of resistant to getting rid of the app for that reason. Right. And the one change that I have seen to the content on my TikTok app when I open it in recent weeks is that there's just a lot of pro-TikTok
marketing that the company itself is putting onto people's TikTok apps. So like I opened my TikTok the other day and I got this video that was basically from a TikTok creator sponsored by TikTok explaining why millions of small businesses depend on TikTok and why it'd be a super bad thing if it went away. So do we think that TikTok's efforts to kind of mobilize its user base through the actual TikTok app
had an effect? Well, it's interesting because the last time we talked about this, we did talk about it having an effect, but it was a negative effect, right? You'll remember that there were these messages that were being flooded through the app that said, hey, call your Congress member and tell them you do not want this to happen. And members of Congress got super mad, and they said, aha, look, this is an instrument of mind control. Look at all the phone calls we're getting to our office.
But TikTok did not bow down. They mounted a marketing campaign. They're spending millions of dollars. And yes, they are doing exactly what you just said. And they're not just promoting the sort of small business side of it. There is an ad featuring an American nun who uses TikTok to, you know, preach the good word who is out there as well. There's a great story about this in the New York Times, the sort of marketing efforts that TikTok is doing. So yes, there's been a huge marketing push. But as far as I can tell,
That sort of lobbying push has not been the main reason why this bill seems to have slowed down. I mean, my assumption was that it was some combination of, you know, aggressive lobbying, the kind of Trump factor that you mentioned, and also just kind of the fact that Congress can't really seem to focus on anything for longer than a few days. Like there's a kind of shiny object problem here, which is that something comes up like TikTok.
And everyone gets really excited about it. And they build some momentum. And all the politicians want to go on cable news and talk about why they're so excited to do this thing. And then they kind of, you know, something else happens, you know, the eclipse or something happens in Ukraine. And I hope that the Senate takes action against this eclipse because it was quite scary to me. But there just seems to be a lack of perseverance when it comes to actually moving these bills
to the point where they might actually become law. That's right. So let's talk about this shiny object problem. So last week, in a letter to his colleagues, which was about everything that he wanted to get done before the election, the Senate majority leader, Chuck Schumer, did mention that there was a possibility of working on a TikTok bill. But as I read this letter, this mention comes almost in passing. It is on page two of a letter that identifies TikTok among 10 other priorities that the Senate could also work on. So, you know, if you want to be optimistic, you could say, OK, at least Schumer seems open to it. On the other hand, there's a really long list of stuff that the Senate might do instead.
However, there was another little twist when on Monday, the Senate minority leader, so Mitch McConnell, called on his colleagues to take action on the bill. So he sort of brought it back into the discussion. He gave a floor speech and he said, quote, this is the matter that deserves Congress's urgent attention and I'll support common sense bipartisan steps to take one of Beijing's favorite tools of coercion and espionage off the table. So, Kevin, where would you rank this in terms of quality of Mitch McConnell's 38 years of floor speeches in the Senate? Yeah.
Well, having listened to the entire thing beginning to end, I will say it's in the top quartile. No, obviously, I didn't. That actually surprises me because I thought that after former President Trump came out against the TikTok ban, my assumption was that most Republicans were going to run screaming from this bill because they didn't want to do anything that could put them at odds with him. Obviously, he is sort of the standard bearer of the party. And so
it is surprising to me that Mitch McConnell broke from former President Trump here. Do you have any sort of insight about why that might be? The only thing I can think of is that Mitch McConnell is not running for re-election and he doesn't care anymore. And he's actually telling us what he really thinks about this. So given that...
It now appears to have lost some momentum, this attempt to ban or force the sale of TikTok, but that maybe it's coming back because Mitch McConnell wants to focus on it. Like, where would you handicap the odds of a TikTok ban at this point in time? I mean, I think that it is leaning toward not likely to happen.
On Monday evening, the Senate Commerce Committee Chair Maria Cantwell, who was also a player in our previous story, introduced the data privacy bill as well. Well, Senator Cantwell told reporters that she doesn't think that the House bill can withstand a legal challenge, or maybe I should say she said it can't well withstand a legal challenge. Kevin?
And that's really important because she runs the Senate Commerce Committee. So it's kind of her call. Yeah. So I think there are a couple of ways of analyzing the position that TikTok is in. One is by sort of parsing the statements of lawmakers who are actually in a position to pass this bill and do something about it. The other is by looking at TikTok itself and ByteDance itself and how they are reacting
and what the mood is over there. So what can you tell me about ByteDance and how they are proceeding at this point? Are they acting like a ban of TikTok or a forced sale is imminent? So I'm so glad you asked me this, and I have some thoughts here, but I want to make clear that I am speculating, okay? I can't get anyone at ByteDance to tell me what they really think. But I've noticed a couple of things, okay? One, the South China Morning Post reports over the last week that
Whereas in 2020, when President Trump first tried to ban TikTok, ByteDance got really involved. This time, they're taking a little bit more of a hands-off approach, and they're letting TikTok, the sort of regional subsidiary, handle a lot of the marketing. You can read that a couple different ways. Maybe ByteDance thinks it would be counterproductive if they were sort of in everyone's face, but I don't know. Part of me feels like if this felt like a real existential threat, you would see them being much more active than they're being. So that's thing one.
Thing two is actually my favorite though, which is, have you heard about TikTok Lite? No. Okay. So basically in response to some slowing growth in Europe, TikTok is rolling out a new version of the app that is designed to entice European users who are 18 or older with financial incentives for using the app. So this app will reward you for certain actions. You can get paid to watch TikTok. This is huge news. So, you know,
if you watch videos, if you invite friends to join and you earn these points and they can be exchanged for gift cards, you know, and maybe other currencies that you can use inside. Yeah. So why do I talk about this? It is very funny to me that lawmakers think of TikTok as the Beijing mind control app. And TikTok now has a new app that gives you financial rewards for staring at the mind control app for even longer. That doesn't seem to me to be the action of a company that thinks it's about to get banned in the United States of America. Do you know what
Totally. And it also raises a very thrilling possibility, which is that teens might actually be able to justify spending hours a day on TikTok to their parents by saying, Mom and Dad, I'm paying my way through college. This isn't mindless entertainment. I just clocked out of my shift at the TikTok app, spent eight hours earning enough gift cards to put some food on the table this week.
So, look, and it's not just this TikTok Lite app. They have another new app in development. The Information reported this. It's called TikTok Notes, and it's billed as a direct competitor to Instagram. They also have CapCut, which is a video editing app. It's very popular. And another app called Lemon8. So, look, this company is in full-on expansion mode. You skipped my favorite one of these. Which one? Did you know that TikTok, or ByteDance rather, has a homework cheating app? Wait, what?
I just learned about this this week. I feel like every week I learn about some new piece of, like, the, the ByteDance, uh, you know, cinematic universe. And this week, for the first time, I learned about the existence of an app called Gauth.
G-A-U-T-H, not goth like, like, you know. Like the phase I had in high school? Exactly. So this is an app that ByteDance has created that basically lets you point your phone at a homework problem, like a math problem or a science problem or an essay prompt, and it just uses AI to, like, solve your homework for you. And they are marketing this aggressively to teens.
Which is just my favorite piece of this, because if you, as a parent of a teenager, weren't already pissed off at TikTok for, like, commandeering hours and hours of your kid's screen time a day, now they're like, we're going to give you another reason to hate us. Here's an app that lets you cheat on your homework.
I truly did not know about this until you just mentioned this. And I'm laughing so hard because this is exactly what I'm talking about, right? You would think that if it seemed like there was real action here, TikTok would be hedging its bets a little bit more. It would say, maybe let's not roll out the cheating app. Let's maybe not roll out the rewards app in Europe. But instead, they're going full steam ahead. So again, I'm in the realm of speculation here. We should say that House bill came out of nowhere and passed in
barely a week. Something like that could still happen in the Senate. But if you've wondered what has happened over the past month in the TikTok cinematic universe, that is what is going on. So, Kevin, I would put the question to you that you asked me, which is how would you sort of assess the odds here of what happens next? I have no idea. Like, honestly, I
I have lost faith in my ability to accurately forecast what is happening to TikTok and ByteDance, because there was a point where I thought it was sort of a done deal, that this bill, this attempt to force a sale of TikTok, had so much momentum behind it from so many different parts of the political universe that I thought, this is maybe the one thing that Congress can agree on right now and actually go forward and get done. And then it just, it seemed to stall out. And so I just have
very little ability to, you know, maybe I could spend a day making calls to lobbyists on K Street and lawmakers and sort of piece together some more accurate forecast, but I just do not know. I think, look, I think there are still good reasons to want to support a forced sale of TikTok by ByteDance. Like you and I made the case several weeks ago for why we were sort of leaning toward banning it. None of those underlying reasons have changed. And in fact, like the first thing I thought
When I opened up my TikTok app and I saw all of these ads, these, like, pro-TikTok ads that were actually put there by TikTok, the company, I thought, well, this is a demonstration of the power that this company has to kind of shape the views of Americans. And I do worry about that. And none of that reasoning has changed. But I guess, man, I am just so cynical about Congress and its ability to get anything done before being distracted by the next shiny object. So I guess if I had to
handicap it, I would put it at, I don't know, maybe 50-50 at this point, but that's a total guess. Yeah, it really does feel like a coin flip. And I'm like you, I still lean on the side of wanting to see Congress take some sort of action. But in the meantime, as soon as I can earn gift cards for my TikTok screen time, well, let's just say I'll be very interested.
Have you seen the thing on TikTok? Wait, I'm going to test your knowledge of the TikTok universe. Because one particular thing about TikTok is like, you know, everyone's TikTok is so individual that I never know like what anyone else is seeing. So yours could be totally different than mine. Does the phrase, what's up, brother, mean anything to you? Sadly, it does not. Okay, we can cut this. There's a thing going around on TikTok where you're supposed to go up to your boyfriend or your husband and go, what's up, brother? And then what do they do?
I'm sad that I know this because I did a deep dive on this last night. You're supposed to go, Tuesday, Tuesday. Special teams, special players, special plays.
See, if I were trying to argue that this is a Beijing mind control app, this is exactly the sort of thing that I would be submitting to the Senate because literally not one word that you just said makes any sense. I think I might be having a stroke. Actually, I'm going to call the doctor as soon as we're finished recording based on what you just shared with me. I'm the reverse Mr. Beast. I only take. I never give.
These are apparently, and I know this because I was so mystified by this TikTok trend that I had to go all the way down the rabbit hole. These are lines, these are catchphrases from a popular Twitch streamer known as Sketch that apparently men, especially men who are into sports, know very well. Well, I think we've just identified why I wasn't super familiar with the...
information. You're only interested in fake sports. That's right. All right, well, uh, in conclusion, TikTok is a land of contrasts. And stay tuned as things either happen or do not, right here on Hard Fork.
This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does, without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you, helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us slash AI.
Before we go, thank you to everyone who has sent in so many amazing anecdotes and stories about how you are using generative AI at work, both successfully and very unsuccessfully. Keep them coming. We are putting together an episode where we will go through some of our favorites and talk about them with you. And we just love hearing from you. And we're still looking for stories about Snapchat. Is Snapchat causing drama at your school or your kid's school? We want to hear about it.
Hard Fork is produced by Davis Land, Rachel Cohn, and Whitney Jones. We're edited by Jen Poyant. We're fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Rowan Niemisto and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Go check us out on YouTube at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.
Hi, Kips listeners. Today I'm sharing everyone's favorite lunchtime indulgence, the double quarter pounder with cheese from McDonald's. It's the go-to that keeps you full and energized for the rest of the day. It's not just a meal, it's a whole experience. You know it's fresh when you feel that heat through the bag. For those of us who know burgers, the McDonald's drive-thru is all about the double QPC. When those burger cravings hit, nothing comes even close.
Get a drip that's as fresh as your drip when you order a Double Quarter Pounder with Cheese at McDonald's. Fresh beef at participating U.S. McDonald's; excludes Alaska, Hawaii, and U.S. territories.