This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence.
How's that for a vision? Learn more at www.kpmg.us.ai. Casey, how's your week? Oh, Kevin, this week has been humming along. I have a big drawer in my kitchen, and I closed it such that a muffin tin went vertical, and now I cannot open the drawer. And it is deep enough that I cannot access the...
the muffin tin with a ruler. And so I may need to hire a handyman to open a drawer at my house. Wow. You know, this always happens to me with the drawer under the oven where you keep the sheet pans. Ah, yes. And the sheet pan sometimes gets like stuck and like lodged in a way that makes it impossible to open the drawer to like fix the sheet pan. This happens to me like every six months. It's infuriating. I'm glad we're talking about this because people don't talk about this.
But there are so many drawers in this country that just don't open anymore. It's true. And what is the construction industry doing about it? I haven't heard one thing. What is President Biden doing about this? Where is President Biden on this? Come on. Come on. Come on.
I'm Kevin Roose, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week on the show, indie AI companies are falling apart? We'll tell you what's happening. Then, listeners respond to last week's segment about teens and social media. You'll hear from actual young people on the subject. And finally, what's the deal with Shrimp Jesus?
Casey, it has been a messy, dramatic week in the world of AI. And that's my favorite kind of week to have in AI heaven. Yes, we love a mess on this show. And we should talk about the mess because I think there's some pretty interesting things going on. You know, for the last basically year and a half, the story of
the AI industry has just kind of been a bunch of graphs that are all going up and to the right. Everyone's raising money, everyone's making money, all the models are getting better. And now I think we're starting to see some cracks in the AI industry emerge. - Yeah, the tide is going out, Kevin, and it's scooping up some companies that we've talked about on this very podcast.
Yes. So this week, one AI company, Stability AI, this is the company that makes the Stable Diffusion image generator, announced that its CEO, Emad Mostaque, former Hard Fork guest, was resigning from the company. Mostaque said that his departure was because he wanted to spend more time pursuing decentralized AI. And this news caught a bunch of people by surprise. What did you make of it?
Well, first of all, spending more time with decentralized AI is the new spending more time with your family. So families are out and decentralized AI is in.
But look, this was a surprise, Kevin. You know, Emad Mostaque was actually a guest on the third episode of Hard Fork, and he made a really strong impression on us. You know, I think until Kara Swisher came on this show a few weeks ago, he was the single most confident person who'd ever been in the studio with us. It's true. He had all these stories about how he was going to use AI. And, you know, some of that sounds pretty silly in retrospect, but you have to remember, he was a very important figure in the world of generative AI. Like, I went to this party back in late 2022, where Stability AI was announcing that they had just raised $101 million, like a very large first major funding round. And they were this buzzy, hot startup, and he was this buzzy, hot CEO. And they threw this huge party at the Exploratorium, and all these bigwigs from tech showed up. And it was just kind of like, it felt like an arrival of sorts.
And so now, you know, less than two years later, he's out and this company, Stability AI, appears to actually be quite unstable, which leads me to my first point about this, which is if you are going to name your company something that is going to look extremely funny in hindsight if it blows up,
don't do that. Do not name your company Stability AI if every time the company has problems, people are going to say something like, oh, they should have called it Instability AI. Do not do that. Don't name your company Extremely Profitable or Totally Solvent or...
or definitely not a scam. Just go with some other name. That's right, Kevin. It'd be like if you wanted to start an AI company devoted to doing all of its research in the open and call it OpenAI and then move to a more closed model where you share nothing.
But who would do that? That would never happen. Be real. It would never happen. So I've talked to some people sort of in and around this situation. You know, people say on some level what happened at Stability AI, what is happening at Stability AI is a pretty standard story. This is a company that raised a bunch of money
But ever since then, it seems to have been a pretty rocky road for the company. And maybe we should talk about some of the things that have happened to them. Yeah. So I think the main thing that was happening at Stability AI that was public was that executives kept leaving the company. And I would shout out Bloomberg, who over the past year has done a lot of great reporting on this subject. But at least five vice presidents left the company within the last year, including their head of audio, Ed Newton-Rex, who is no relation but has a great last name. He resigned in protest of how AI companies, including Stability, had been treating copyrighted data. Stability was sued by Getty Images for copyright infringement in both the United States and the United Kingdom. And so that's a lot of turmoil to have at a company in a single year.
Yeah. This company, I would say, has been on the hot mess express for more than a year now. I mean, they've had lawsuits. One of the co-founders of the company sued Emad Mostaque, alleging that Emad basically cheated him out of his stake in the company. There have been all these departures that you talked about. And investors in the company have not been happy with Emad Mostaque for months now, in part because the company was just losing a lot of money and didn't really seem to have a business model. Emad Mostaque also was accused of maybe fabricating or embellishing some of his credentials, claiming that he had degrees that he didn't have and whatnot.
And so it has just been a very messy last year for Stability AI. Now, I did also talk to Emad Mostaque last night. Oh, very good. I'll just read you some of the texts he sent me. Because I asked him, you know, why did you leave? And he said, being a CEO sucks. Elon was right. It is like looking into the abyss and chewing glass. Yeah.
He also said, quote, I am not a normal person. It is impossible for me to do distributed instability, and I don't like being a CEO. Well, I mean, points for honesty, man. Points for honesty. He also said, I asked him what his plans were in the future. He says, I don't know.
I'm going to do Indonesian and cancer models and the Web3 protocol to tie it all together. Yeah, that makes sense. And I don't think we need any further explanation of what that means. But you know, Kevin, it strikes me that some people might be listening to this and thinking, look, this seems like a fun story, but I never used Stable Diffusion. Was it really even that big of a deal? So, Kevin, tell us a little bit about kind of what they were making and why, at the time, this seemed like a company that we would be mentioning in the same breath as an OpenAI or an Anthropic. Yeah, I mean, Stable Diffusion was one of the first, you know, big image generation models. When DALL-E and Midjourney were first coming out, it was quite a big deal. And Stability AI had this vision that was different from some of the leading AI labs. They were going to create these tools, language models, image models, video models, and they were going to release them freely.
and people would be able to build their own versions. And this was sort of, they were pioneers in kind of the open source AI community. And they have built some models that are quite widely used and that have fans and people seem to like them. But they've also really struggled to build a business around that because if you're giving your software away for free and you're not charging people to use it, that really eats away at a
potential revenue stream. And so I wonder, you know, if they ever had a plan to make money. Obviously they had some revenue. It's not like this company has never made money, but they were not making enough to satisfy their investors. And ultimately, you know, Emad Mostaque ends up leaving the company. They've replaced him with two co-CEOs, and Emad, when I talked to him last night, seemed to think that the future of this company lies in kind of the media models, that they were not going to sort of compete with GPT-4 and Gemini and all the text-based models, but that there was actually a lot that you could do to make money when it comes to image-generating models and movie-generating models.
What do you make of that? Well, so, you know, the case that I would make for Stability mattering is that in 2022, I was able to get a Stable Diffusion model running on my M1 laptop and generating images just via text. Now, if you've done that using something like DALL-E on the web, that might not sound very impressive. And in fact, the images I was getting then do not compare to the images you can get today.
But the fact that I could run the entire model on my laptop was one of those whoa moments. And Ahmad came on our podcast to talk about this a month before ChatGPT was even released. So sort of even before that sort of big bang moment for generative AI, they were working on this stuff. They were doing really cool things. So that's why I was sort of invested in what was going to happen in this company.
And you've certainly shared enough reasons for why it didn't work out for them. But I would say it raises even more than that. And I think number one, Kevin, is just you cannot overstate how expensive it is to compete at the highest levels when you are competing against literally the richest companies in the world in Google and Meta and Microsoft.
And two, you can't overestimate how hard it is just to get your hands on the infrastructure that you need to do this stuff, right? What often gets called compute. Do you have access to the GPUs that are necessary to train these next models? So I think just on those two axes alone, it's super hard to run a company. And then you throw in the challenge of, you know, building a consumer business and a good product and going out to market. And yeah, you can see why a company might not realize all of its ambitions.
Yeah, so I think on some level, this is sort of a standard business story. Startups fail all the time, even ones that raise lots of money. And there are management and leadership changes. But I think it doesn't help that they were, you know, they were sort of early in a market that has since become very, very competitive, with many of the largest companies in the world throwing billions of dollars into trying to train their own models. And it's just very hard to compete with that, even if you do have a ton of venture capitalists in your corner. All right, well, enough about them. Is there another big, interesting AI company out there that's also going through it, Kevin? Yeah, so last week, Microsoft announced that it was hiring away two co-founders and a whole bunch of employees from Inflection. Inflection is an AI company that we've talked about a little bit on this show. They are best known for their chatbot, which is called Pi, which was sort of marketed as a more personal chatbot. Some compared it to almost an AI therapist.
And they had raised a ton of money from venture capitalists to build out future versions of their AI models. And they were run by these very experienced AI leaders, including Mustafa Suleyman, who was one of the co-founders of DeepMind.
He is joining Microsoft as a bigwig in their AI division, along with many of Inflection's employees. This was a bombshell when it came out. Everyone I know who follows AI closely was talking about this, gossiping about this, trying to make sense of it, because this was one of the highest flying AI startups just a year ago. And now they're basically being sort of dissolved and reconstituted within Microsoft. And we should say this is a
bizarre deal, right? Because this is not Microsoft acquiring Inflection. They are not buying the company outright. Instead, they are basically hiring the majority of the staff and striking a licensing deal. Microsoft, according to The Information, is going to be paying Inflection $650 million for the rights to make Inflection's models available through Microsoft's Azure cloud service.
And Inflection reportedly has also agreed to use that money to pay back its investors, maybe the value of their original investment plus a little bit more. But this is a strange structure for one of these deals. Yeah, because not only are they getting paid out, but they also got to keep their equity in the company. So what is happening?
Well, essentially, Microsoft and Inflection are finding a way to pay off all of the investors in Inflection so nobody stamps their feet about all of this. Microsoft gets access to all of the top talent at Inflection or probably most of it.
Inflection gets to continue on as a kind of husk of itself, so no one can say that Microsoft actually acquired the company, but Microsoft gets all the upside anyway. And I have never seen a deal like this in Silicon Valley. Yeah, and some people I've talked to have described this as a non-acquisition acquisition. Basically, you know, it's not easy to acquire a startup if you are one of the biggest tech companies. Regulators in the U.S. and Europe have placed a lot of scrutiny on tech acquisitions, especially by the biggest tech companies. And so I think there is an assumption, if you are a big tech company looking to buy a smaller tech company, that you're not going to be allowed to do that, or at least it's going to be challenging and regulators are going to challenge your right to do that.
And so by structuring the deal this way, where it's like, we're not acquiring the company, we're just hiring away the leadership and many of the top talent and licensing their models, Microsoft gets to kind of dodge the regulatory scrutiny that it might be under if it tried to buy the company outright, which is very clever and something that, if you're a regulator, I expect you're looking at and going, ah, we didn't think of that one. Yeah, it definitely feels like one of these curses-foiled-again moments, right? Because, you know, think about all of the really canny investments that Microsoft has been able to make in AI over the past few years. Most famously, they are hugely invested in OpenAI. When OpenAI started to fall apart last year, they basically swooped to the rescue and helped to ensure that Sam Altman returned to power, protecting Microsoft's very large investment in that company. And, Kevin, as I know you remember, one of the ideas at the time was that if they couldn't restore Sam Altman as CEO of OpenAI, they would basically put him in charge of AI at Microsoft. Well, fast forward to today, and they still have that investment in OpenAI, and Microsoft just went and hired one of the biggest players in the space, one of the co-founders of DeepMind who founded Inflection, and now he's going to run AI at Microsoft. So they have really hedged that bet. And as if that weren't enough, they're also invested in Mistral, which is a very hot French AI startup that has just a huge pedigree of people who worked at all the big AI companies before it. So essentially, whoever wins in AI, Microsoft is just poised to reap a huge amount of the upside. Yeah. Somebody I talked to this week described it as sort of Microsoft's attempt to do a land grab, basically to spend a bunch of money to get all of the best AI people and companies and models sort of under their roof, or if not officially under their roof, then at least to take stakes in them or to be their cloud provider or something. And that they're using this moment, when there is some weakness in the industry, when a lot of these companies are not making money yet, to just hoover up all of the talent and all the resources that they can. Yeah. And so
How do we feel about what Microsoft is up to here, Kevin? I mean, I think it's smart strategically for them. Obviously, they now know after what happened at OpenAI last year that there are downsides to being a minority investor in an AI company. You don't
control it. Microsoft now does have an observer seat on the OpenAI nonprofit board, but they are not really in control of that company. And that's by design. They structured that deal in a way where they wouldn't get majority control, presumably for some of these same antitrust and regulatory reasons.
But yeah, if you're them, you do have to be looking at what happened at OpenAI last year and saying, well, we better have a plan B and a plan C if something happens to that investment. And so I think this move, hiring away Inflection, is a good way to sort of create an insurance policy for themselves if something does happen at OpenAI. But you wrote a newsletter this week that I thought was interesting that I want to talk about, which
basically said that these two stories, the Stability AI leadership transition slash, you know, basically unraveling, and the Inflection quasi-acqui-hire by Microsoft, are sort of part of a trend that we're seeing: a sort of faltering of what you might call the sort of middle class of AI, these companies that are not the Googles, the Microsofts, the Metas, that are sort of one tier below that, that they're really struggling. So explain what you wrote and what you meant by it. Yeah, well, look, when generative AI first hit the scene, there was a lot of optimism that this was going to be a moment in the tech industry akin to when the App Store first landed on the iPhone.
And all of a sudden, you had a platform that could support all these new kinds of businesses, whether it was Uber or Dropbox or mobile gaming. All of a sudden, entrepreneurs just had access to this giant new global market and could
invent a bunch of new stuff. And I think there was some optimism at the start of the generative AI moment that this was going to be similar. And so you had all kinds of investors pouring billions of dollars into startups and lots of little teams leaving Google and Meta and other companies to start up their own businesses.
And what I think we've started to see this month, Kevin, is the tide is starting to go out there. It is starting to dawn on some of these companies that the giants in some cases really are too big to fight against. The giants are the ones who have the money, they have the computing power, they have all of the resources necessary to train those large frontier models, and they have the product shops
and the distribution strategy to actually turn those into real businesses. And I think you've seen companies like Stability and Inflection take a swing at doing all of that themselves, of trying to advance the state of the art on the tech side while also building a business, maybe building a big consumer business. And, uh,
They are just not succeeding. So that I think is really notable and it makes me a little sad. Why does it make you sad? Because I am somebody who wants there to be more smaller companies. You know, I don't think that the
ideal state of the world is one where there are four or five tech giants. I think it's one where there's lots of medium-sized companies who are all competing, who are giving a lot of choice to consumers, who are not dominating the landscape. And
And whenever I see Microsoft wind up in a situation like this, or, you know, maybe Meta wind up in a situation like this, I just think, oh, well, you know, so much for the chance that we had to unsettle the landscape. It looks like the new bosses are going to be the same as the old bosses. Yeah, I think that's right. And this moment in AI really does remind me a little bit of the sort of moment maybe a decade ago where you had...
Uber, which was raising all this money and building this huge business. And then you had all of the kind of Uber wannabes, you know, the Uber for X, right? Uber for laundry, Uber for dog walking, you know, all these different sort of flavors of the same fundamental business model that Uber pioneered with the kind of gig worker model.
And you just had investors lining up to just shower these startups with cash, thinking maybe this will be the next Uber. Maybe this will be the thing that makes me, you know, 100 times my original investment. And, you know, most of those companies failed. And it wasn't for lack of trying. It just...
the sort of nature of the venture capital business is that you kind of spray and pray, right? You shower a bunch of different startups with money. Most of them fail, but the ones that succeed, you know, make enough for you that it pays back for all of the losers. So I think that's really what's happening here in AI. Investors are just kind of
throwing money at anything that looks like it might have a pulse, that it might have a chance of getting that product market fit and making money. And I don't think they're going to be too dissuaded by some failures along the way because they pretty much expect it. I think that's right. Now, I talked to somebody at one of these companies after my column this week, and they said to me, look,
What you're seeing actually could be a temporary phenomenon. Right now, the key limiting factor in the ability to start a great AI company is access to computing resources. And that is a temporary phenomenon.
within, I don't know, a year, a couple of years, if you want to start your AI company, you're going to be able to get your hands on more of the chips and the computing power that you need. And that is when you're going to go and be able to build your great business. And so it may look like the giants are winning handily for the next year or so, but eventually you're going to see the challengers rise up again. Now, this argument may be a little self-serving. We're going to have to check back
on this in a couple years to see if it's true. But if you're looking for some sliver of optimism among the sort of great washing out of the indie AI companies, I leave that for you. Yeah, I think there's a little bit of optimism for smaller and medium-sized AI startups here, in that the big money trucks have not really started to arrive yet. You know, we've seen a ton of funding come into AI startups over the past year or two. Amazon just this week announced that it was investing another $2.75 billion into Anthropic. That's on top of a bunch of money they had already invested; they've invested about $4 billion so far. And my colleagues at The Times also reported this week that the Saudi Arabian sovereign wealth fund, the PIF, is considering raising a fund of $40 billion with the help of Andreessen Horowitz to invest in AI startups. $40 billion is a lot of money, even in the world of AI. And so I think we are going to see another wave of kind of institutional investors who are desperate to get in on the AI boom, just funding tons and tons of startups. That money is not all going to go to Microsoft and Google and Amazon.
Well, I'm disappointed to hear that Saudi Arabia is investing in that and not journalism. It would be great to see $40 billion go to critical reporting of that regime, but maybe next time. Maybe next time. All right. Well, we'll keep tabs on the messes in AI, but I would say what you're seeing, at least sort of the vibe that I'm picking up, is that people don't think the party is over. They don't think the
party is ending, but they do think that there is sort of a, you know, going to be a little bit of a washout as some of these companies that raised a ton of money without very clear paths to profitability start getting tough questions from their investors about, well, what's your plan to actually make back our money? That's right. And in the meantime, if your AI company is falling apart, we'd love to hear from you. Email hardfork at nytimes.com. All right.
When we come back, we'll hear from some of our listeners about our segment from last week with Jonathan Haidt about young people and smartphones. Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.
Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.
I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. It takes a lot of time to find people willing to talk about those secrets. It requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light.
If you want to support this kind of work, you can do that by subscribing to The New York Times. So, Casey, last week we talked with Jonathan Haidt, the social psychologist and author, about how smartphones and social media are affecting young people. And I think it's safe to say it was one of the most polarizing segments we've ever aired on the show.
Yeah, I mean, this is just one of those where everyone has a really deeply felt personal opinion. Some folks are convinced that social media really is the primary cause of the mental health crisis in young people. Others think it is a moral panic. And we really heard from all corners of that debate over the past week. Yeah, you know, we always love to hear from our listeners. And I just thought some of these responses were so thoughtful that we should actually just...
call up the people who wrote to us and bring you their perspectives. And we should also just say, like, part of why we are talking about this subject is not just because it's something that people have strong feelings about or that we're getting older and we feel like the kids these days, you know, are using their phones too much.
This is a really active debate right now and a real inflection point in the history of the internet. I mean, just this week, Ron DeSantis, the governor of Florida, signed a new bill into law that would prohibit kids under 14 from creating social media accounts. There are other laws around the country that are making their way through state legislatures with similar things in them. This is a really, really important debate right now and an ongoing policy discussion. So I really want to hear from
our listeners, from young people especially, about how they're thinking about this question. So first up, we're going to talk with Jordan Lucero. Jordan is a high school junior who sent us an email pushing back against some of the arguments he heard here last week. In his own words, he is addicted to his phone. But as a gay student, he is skeptical that taking away his smartphone in school would improve his own experience. So let's bring Jordan here.
Jordan! Hi! Jordan! How are you? It's an honor to be here. What's going on? I just got out of class. And how was class today? It was good. We watched a movie, so we didn't do anything. That's what I like to hear about the American education system, is why read a book or write anything when you can just enjoy the finest of what Hollywood has to offer?
Well, hey, thanks for hopping on with us. You know, last week we interviewed Jonathan Haidt about his new book and we got some really great emails and yours struck me. Something I have talked about on the show is being worried that if we take social media away from kids, it could make life harder for LGBT kids in particular. Talk to us a little bit about that. Why don't you like this idea of taking away phones in schools?
The clearest thing, I tried to make it clear in the email that I was writing at 1:00 AM. I just re-read it again. I was like, "Oh wow, I was pissed." If our teachers don't trust us as people to manage our time, it completely changes the dynamic of the class. I think if we need to look up something that we didn't get, like we missed a note or something, it helps us learn better. If a teacher just takes that away, it just rubs us the wrong way.
If I don't have friends in the class, it's just kind of miserable. It's just the basic thing around respect. Right. Well, so explain this a little bit because, you know, when I was in high school, if I didn't have a friend in the class, I would just sort of have to talk to whoever was around me to entertain myself. But you have a smartphone. So you told us that you actually just sort of text people who are not in your class during class. Tell us how that works. So I have so many group chats and I'm very dependent on them because a few weeks ago when AT&T was out,
I couldn't text anyone for two hours. And honestly, I had no idea I was that dependent on having constant access to just saying whatever I want. So it helps me just put my thoughts into words.
It just helps me feel better about what I do every day. - Now, some people might hear that and say, "I don't know, Jordan, that sounds like that must be pretty distracting during class." And you know, I know in some of your classes you just have to watch a movie, so it's probably fine, but presumably there are ones where there are lectures and homework and stuff, so don't you feel distracted by your smartphone all the time in class? - Definitely, and I should be a lot better at managing it, but mostly in my classes, most people are focused
and they're using their phones to either reply to a quick text, or they might be scrolling through, but they're not doing it during class. They're doing it in off time, so it's not hurting anything. I wouldn't say they're missing anything at all. Sometimes my phone helps me: if I have to look up something, I'm able to do that. If we're in downtime and I see another teacher posting an assignment, I'm going to get a start on that. I'm doing emails, I'm managing the other clubs that I'm in, I'm doing all this stuff. I'm not just scrolling through, brain rotting. Just like how you guys use your phones to do things, we're also using them as a tool to get things done. Well, Kevin mostly uses his phone to rot his brain, but I think it's actually really inspiring how you use it. And I'm hoping he takes a couple of notes from you. Yes. Teach me your ways. Jordan, this is a little off topic, but do your group texts have, uh, names? 'Cause one thing I have heard is that teenagers' group chat names are totally unhinged. Um, I never thought about it.
I guess they're all kind of insane. So we have Busy Bees because one time I misspelled Busy Bees. Peach emoji. Just that. Be who you are and Badoosie Bus. Wait, what was the last one? Badoosie Bus. Badoosie Bus. Kevin, you don't have that group chat yourself?
Wow. We have so much to learn. Okay. Just a couple more questions. Jordan, you know, one thing you have me wondering is like, I can understand why the internet is useful to you. You know, you look up facts, like all of that makes a lot of sense to me. When we were talking to Haidt, he tried to make a distinction between the internet and social media.
and said, look, internet access is fine. You want to get on your laptop and look stuff up. That's fine. That's, like, not hurting anybody. But it's sort of the social media of it all, where you're expected to take pictures of yourself and there are like counts underneath. And, you know, maybe it makes you feel self-conscious about your appearance. That is the really harmful thing. Like, do you have thoughts about the kind of social media in particular and what it might be doing to your school? Yeah.
I think it's really a net positive, but that's the thing. He tried to, like, distinguish between the internet and social media, and they're now so intertwined that you can't really make that distinction anymore. Like, if I want to get the news, I'm doing that on something like Threads or Twitter. That's just one example of how it's all so intertwined.
I'm curious, you know, again, there's this big push to take away social media from kids, especially kids younger than 16. And I just wonder, as a gay student, how you think that would affect LGBT students? It would really harm the self-discovery process, I think, and finding a healthy community. Because...
I was on Twitter at age nine, which is absolutely insane. And it's for the better. I've been lucky enough to have that experience where it's a healthy experience. But what were you doing on Twitter at age nine, Jordan? Were you dunking on people? No, my Minecraft YouTubers would be like, follow me on Twitter, and I would just do anything. So you were tweeting at Minecraft YouTubers. Yeah. Okay.
But it helped me see a large variety of viewpoints about a lot of social topics that would just be on trending topics every day. And it taught me a lot more about the world and it's made me a better person for it. Yeah. So am I hearing you say that you feel like social media helped you come to a sort of very positive self-understanding? Yeah, definitely. And then my last question was just whether you're doing anything in high school to get the word out about Hard Fork.
I'm going to post this on my Instagram story if you guys put it in the show. Yes! Well, this is fantastic. Your email truly meant a lot to me. I really appreciated you taking the time not just to listen to the show, but to write in. And thank you for indulging our questions. Thank you, Jordan. You're inspiring me to come up with funnier and more unhinged names for my group chats. So for that alone. Badoosie Bus, baby. I'm grateful. Thank you, guys.
Next up, we're going to talk to Maya Rail. Maya is not a teen. She's 24, but she wrote in to talk to us about this issue. And she pointed out that one thing our conversation with Jonathan Haidt might have overlooked is how valuable social media can actually be for students, and in particular for student athletes. She's an athlete. She runs track and field. And in her experience, social media has been an important way for female athletes, especially, to get opportunities
related to their athletic accomplishments. Hello, Maya. Where are you joining us from today? So I'm actually in California. I'm on the Wisconsin track team and we've got a race in like the Bay Area this week. So you're in our neck of the woods. I am, yeah. Well, welcome to town. We hope you're having a good time here. So Maya, what has been the value of social media in your life as a student-athlete?
I think it's a really nice way for me to figure out what different accomplishments people are having and just get news about who ran a fast time, what athletes I should be following, learn a lot about what's going on in different people's lives. And tell us what kind of athlete you are. What's your sport? So I'm a runner. I run the mile, the 1500 meters, the 5K.
Awesome. So further than Kevin runs in a typical day. Hey, you don't know that. Probably faster, but maybe not further. I also think that part of what you're bringing up here is that if you're a really serious young athlete and you want to compete at the highest level, Division I, you want to go pro, you might actually be at a disadvantage if you're not posting on social media, even from the time that you may be a freshman in high school. Does that sound right to you? Yeah.
I think that's absolutely correct. I don't know how much the listeners know about NIL, but that's name, image, and likeness. And it's this alternative way that student athletes can harness the power of social media in order to get paid by businesses. It can be really important. I mean, there's a lot of inequality in it, but this is, like, an opportunity for a lot of female athletes now.
And male athletes as well to just, like, have more of a platform than they otherwise would have, and get monetary gains from it, which they can't get directly through their schools. And so this can be a pretty important financial aspect for certain athletes. Yeah.
Now, my guess is you would agree that for all of the benefits that you just raised, which I think are very real, social media can be a double-edged sword. And I wonder if you have seen the flip side of it in your life. Are there people in your life who you feel like social media has contributed to anxiety or depression or has just kind of, you know, made people really upset over the years?
Oh, I mean, absolutely. I think the like body image side of it can be super damaging to people. I mean, I think that there's also just these like echo chambers and rabbit holes where like, as soon as you pay attention to one thing, it just feeds you more and more of that. And you can see the different ways that these algorithms definitely can influence you. So Maya, if you don't think that we should heed Jonathan Haidt's recommendations and keep
smartphones away from kids until they're in high school and social media away from kids until they're 16 or older. If you don't think those are the solutions, what do you think should be done, if anything? Or do you think this is all basically just sort of a scary narrative that adults are telling about kids these days and we should just let the kids do their own thing?
Oh, it's tricky. I mean, I feel like I don't have the answers to it. I think it's really complicated. I mean, I guess, like, I kind of think about it a little bit like drinking, where, like, the types of parents that restrict their kids, you know, like, you can't have a sip of alcohol, like, you have to stay in, like, you can't do any of that, like, as soon as they graduate, as soon as they leave, like, 18, right?
What happens when they get to college is that they just go way overboard. And I saw that, like, so many times that, like, the people with the strict parents that didn't give them any freedom, it just didn't go well. And so I think that parents should be able to trust their kids. Like, you know, maybe a 12-year-old shouldn't be on social media. But, like, a 15-year-old, I mean, that's, like, a –
That's like mostly a real human being at that point who can make their own decisions. Yeah, somebody else who wrote to us this week pointed out that, you know, we don't let kids younger than 16 drive in most states, but you can get your temporary permit. There's sort of a ramp.
for you to like gradually learn how to drive and be given more and more responsibility. And then when you're 16, you sort of get the whole thing. So maybe that's what we need is some kind of like, you know, training wheels for people who are, you know, they're sort of 14 or 15. Maybe they're not ready for the full social media, but they can get their learner's permit. Yeah, I like that idea a lot. I think that totally makes sense. Maya, when did you create your Instagram account? How old were you?
I was probably 11 or 12. Okay. And the reason I'm laughing is that, you know, you're not supposed to create an account till you're 13. Would you say that Instagram threw up any roadblocks to be like, hey, it sure seems like you're 11?
Oh, I mean, I just, I like knew that I had to be, what is it, 13 or something. I just like lied about my birthday. Like literally everyone in my class did that. So your whole class is 11 and they're on Instagram and they're posting photos and collecting likes in the, I guess this would be what, the sort of sixth grade? Yeah. Yeah. Okay. I have to go lie down now. Okay.
Oh, I mean, here's what I appreciate about this discussion. You've hit on this tension that is unresolvable for me, whereas I think it's very clear that social media is hugely beneficial to some young people. And I think that it has very positive effects for them. And I also think that social media has really negative effects for some group of people. And I don't know exactly how large the different groups of people are, but the area where I struggle is how do we design a
policy or a set of policies or systems to sort of ensure that we get the most good out of this for the most number of people while minimizing the harms. And I truly don't know how to do that.
I don't have the answers to that either. I don't know. Maya, that's why we brought you on the show. You're supposed to give us all the answers. I mean, I set time limits for myself and, like, sometimes that works, but sometimes I just, like, ignore them. But yeah, I don't know. I mean, it's definitely tricky, but I think it's, like, part of the world that we're in right now that you can waste so much time on your phone or on your screen. And learning those skills of being able to not do that, whether you're 15 or 18 or 40, I think is difficult no matter what. Well, it's great to talk with you. If you do come up with a solution to this problem, we hope you'll call back. But in the meantime, we hope that you have a great meet this week. Thank you so much.
Next up, we're going to talk to Jack Campbell. Jack is 20 years old. He's in college and he wrote to us with a really unsettling account of how frequently in his life he has learned that a friend or acquaintance had attempted suicide. He wrote to us, quote, I've not experienced childhood in any other decade, but from what I'm told, this was not the situation of 20 or even 30 years ago.
Now, we should note, if you're in crisis, please call the Suicide and Crisis Lifeline at 988, or you can contact the Crisis Text Line by texting TALK to 741741. To learn more about Jack's experience, we gave him a call.
Hey, how you doing? Good. How are you? Good. Sorry about my kind of barren dorm wall. That's okay. I actually find it more attractive than our studio, which was designed by professionals. Where do you go to school? William & Mary. Oh, very cool. And this is your dorm room that we're catching you in? Yeah.
Yeah, yeah. I'm a resident assistant. So this is like my little like office area. And then I've got the sleeping area. What is being an RA? When I was in college, RAs mostly would give out condoms and tell people not to be so obvious about smoking weed in the dorms. What are your main duties? Yeah, yeah. It has not changed. And sometimes I have to do a little pizza party occasionally, you know. Yeah.
Well, first of all, thank you for writing in to us. You sent us a really touching email. You know, maybe we could just start by having you talk about your reaction to our conversation with Jonathan Haidt last week. And what has your own experience been here?
Social media has been obviously, like, a huge part of my life. And I've been on social media since I was, like, you know, 11, 12, and I've had an iPhone forever, and it's worked out great for me. I'm in college and I like to think that I'm doing relatively well and all that. But the data that Jonathan Haidt, you know, kind of brings up about, you know, rates of depression and rates of suicide and all that, you know, those data points are my friends and I,
It's one of those things where, his proposed solution of preventing people from creating accounts specifically, I don't think it's going to be harmful if it's just, like, look but don't touch until you're 16. And I think it's something that we really do have to do something about. It's clearly a crisis.
Have you had friends or people who you're close to who have had serious mental health struggles that you or they would attribute to use of social media? Definitely. A hundred percent. Multiple. And I don't think that you can get away from the fact that so many of our interactions these days are mediated by these online platforms and these social media and stuff like that.
Can you say a little bit more specifically about what aspects of social media do you think are contributing the most to depression and the desire to self-harm?
I think when you have, you know, timed photos and, you know, Snap Map, you really want to know, are kids having a party without you? You can see them on a map. They're all hanging out together. And I'm at home, you know. And all you can do is just, like, watch them drive around. And it's very much the case that you get on Instagram and, you know.
Everyone else is at events. Even if you're at some of these events, just the fact that there are events that you're not at, you get into that kind of self-comparison mode, I think. Yeah.
Jack, you mentioned that you've had social media basically your whole childhood or your whole adolescence and that you feel like it's worked out pretty well for you, even though you do know lots of people for whom it has not worked out well. Do you think there's something different about the way that you use social media versus some of your friends? Or is it just kind of luck? I think a lot of it is luck. I can't deny the fact that I'm male and the statistics don't look nearly as bad for us.
But I don't necessarily think that anything was really different about the way that I used social media in comparison to my friends. Do you think the idea of not letting kids have social media accounts until they're 16 would be popular among your friend group? Or do you think you're more of an outlier? I don't think I'm that much of an outlier. I think Jonathan really hits the nail on the head where if it's kind of a collective action problem, I think that that's going to be a reasonably popular thing among people of my generation.
Jonathan Haidt said that he asked students, how many of you wish TikTok were never invented, and most of the hands in the room go up. Would you be one of those hands? Yeah, I go through periods where I'll just delete TikTok because I need to do homework occasionally. Actually, my girlfriend, she really actually kind of struggles with this. She tries to delete Instagram. She's cut it down to Instagram Saturdays. And then every so often, she'll redownload it on a Tuesday and...
It really sucks for her. She gets really distraught about it. I think this is such an important point. I talk to the folks over at Meta and Instagram a lot, and they push back really hard on me when we talk about this stuff. And they say, Casey, you're falling prey to this moral panic, and this is just sort of comic books and heavy metal and video games all over again.
Um, but like when you talk to the kids who are reading the comic books and listening to the heavy metal and playing the video games, like, none of them were saying, take this away from me, it's too dangerous. Or, like, I have to set aside five days a week where I can't even look at this thing, and I'm going to feel distraught in the days in between. There is some emotional level at which this stuff really scares a lot of people, and it causes them a lot of grief. And that is something that I just think the platforms are really refusing to reckon with. Yeah.
I completely agree. I completely agree. Oh, well, thank you so much for joining us. I really appreciate your perspective on all this. Thank you guys so much. Thank you really. I really appreciate it. Jack, thank you so much. And your mustache is so cool. It's so cool. It's so funny. I called my dad this morning. I was like, should I shave it?
We also got a lot of emails and social media posts from teachers, people who work in schools and see up close the effects that technology is having on young people every day. So we're going to hop on a chat with Brendan Kelly. Brendan is a teacher. He's been a teacher for more than 20 years. He currently works at a high school in Richardson, Texas as a digital coach. And he wrote in with some observations about how he sees smartphones and social media affecting his students.
Brendan, how are you? Hey, how's it going? Good. Where are we catching you today?
All right. So I am at school. I'm at J.J. Pearce High School here in Richardson, Texas. And what do you teach? So I am a digital coach. And what I do is I actually have a small group of elite students, and we will meet with teachers and we'll just help them with lesson plans, help them integrate technology into their lessons. Okay.
Sounds very relevant to the subject we're going to be discussing today. Brendan, after our episode last week, you sent in this story over email that really unsettled me. You said that when you give a state test, you have to keep the classroom quiet after the last student turns in their test. And before iPhones, you hated that because it was impossible to keep kids from talking to each other, even if they weren't friends. But now that iPhones exist,
there is apparently just an eerie silence that fills the room. So talk to us about kind of classrooms before and after smartphones.
Yeah, for sure. So that specifically, that is absolutely right. And the deal is, is that before iPhones and before social media, I would hate when that last test would come in because then that means that I would have to really, really struggle to keep these whispers down. And these are kids, like 30 kids who kind of know each other, kind of not, but it would be a real struggle to keep it just from erupting into just a whole bunch of talking. However, now it's
It's super easy. And that's not because they have their phones and they're just like absorbed in their phones. They still have their phones like away in their backpacks. But the problem is, is that they...
I don't know if it's like a lack of motivation or it's a lack of skills or what it is, but a lot of the times they'll just kind of sit there and wait for those phones. And so it's really easy to keep them from talking to each other because there's not a lot of motivation for them to talk to each other in the first place. So your school is not a school that bans students from carrying their phones on them during the day. Does your school have any rules about how students can use smartphones?
It does. Yeah. So the deal is, is that like theoretically, they are supposed to not have their phones out of their backpacks until lunch, and then they can use their phones during lunch, and then they have to put them away. Now, we do have some pilot schools who are doing a pouch program, like your guest talked about before. And at first I thought, well, that's ridiculous, because what they do is they'll take a broken old
phone, you know, their brother's phone, and then they'll put that in the pouch and then they'll say, all right, let's go ahead and seal it up. And, you know, and then that's fine. And then they're still looking under the table. But... Yeah, take my BlackBerry. You can lock that up. Yeah, right. Exactly. What is this flip phone? So I did not believe in it at first, but I will say I talked to a teacher who was in a school where they're using that. And she says that...
the kids are not only doing it, but they're kind of happy about doing it because, and I think like your guests have talked about this before as well, is that it leveled the playing field. Like, okay, everybody's doing it. Everybody is like off of social media in my school. I guess I'll do it too. She actually gives them bonuses. Like when she sees that the phones are in the pouches, the pouches are like sealed and everything is great. And she says she gives a lot of bonuses.
Like extra points for tests and stuff like that or cash? Oh, we're teachers. Kevin, where do you... I don't know, candy? I don't know. There are lots of ways you can give a bonus. Teachers are just passing out $100 bills over here. They're giving us cash. We're like, hey, can I borrow?
But, Brendan, this is really interesting because what you're saying is that it seems at least for some of the students in your district, they really are looking for an excuse to not have their phones around. And as long as no one else in their line of sight has their phone out, it makes it OK for them to focus on whatever they're supposed to be paying attention to in class.
Yeah, for sure. And, you know, like, I've been asking around, like, kids, OK, tell me your opinions on phones, social media. And it's pretty much 100 percent, you know.
We understand that it's bad. And we also use it for, like, I talked to one kiddo today who was saying that she uses it to calm her anxiety. She floods her feed with, like, she follows all sorts of positive folks and
And when she gets a little anxious, she takes out that phone, reads about, you know, it's all going to be okay, you know, this is a moment and this will pass. And then she puts it down and she's good to go. It's a little bit of, like, self-medicating. But I think that, like, that's the key, is that we...
Anytime you're fighting against human nature, you're going to lose. You're fighting an uphill battle. And so instead of just saying, all right, that's it, we're pretending that phones don't exist, I think we need to teach intentionality behind it.
Right. I totally agree. I mean, I agree as well. But I think, you know, what Jonathan Haidt would say is that it kind of doesn't matter if you teach kids to use these phones intentionally because social media in particular is just structured in a way to make you think about it constantly, to drag you into rabbit holes, to make you feel self-conscious about your appearance. And so
Even if you want to use it in a positive way, you might struggle to do that. So, Brendan, I just wonder kind of what your view has been overall of how students in your district are using social media. If you have a view, sort of, does it feel like a net positive, a net negative, more mixed? What is your sense of how it's playing out?
I feel like it's really the old story. You know, I mean, I feel like I've been talking to a lot of kids and I've heard the same thing now that I've heard like, you know, five years ago is that I know that I shouldn't be on this phone for this long. I have talked with kids who over the summer had 18 hours of TikTok daily. That's too much.
And she was like, but that's okay. Now it's just down to 10. And so like that, that's the thing is that we, we understand that there's like, that it's addictive and it is for sure. I mean,
Dr. Haidt said that if you ask students, do you wish it never was created? They say yes. And, you know, I mean, I'm definitely in that camp as well, but it is created. And I think, yeah, it would be great if we could do things to rein it in, for sure. But we also need to teach the ability to put it down. I think that's a big thing, is the ability to disengage. Yeah.
Yeah. Absolutely. Well, that's all I have, Mr. Kelly. Thank you so much for your time. Yeah. Yeah, if you wouldn't mind being my digital coach, there are moments where I would need one. Hey, I've reached peak digital coach
talking to you guys. I can't even believe it. Thank you guys so much. You know, I'm having this problem with my printer. Maybe you could... Don't even start, please. We've had too many problems today. You don't even know. You've just triggered me and I need to go take a nap. Brendan, I'm going to email you privately to tell you what you should charge Kevin for digital coaching because I know what he can afford. Thank you.
When we come back, the powerful new religious figure who's dominating the Facebook feed. Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.
Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more. All right, Casey, today I really wanted to finish the episode by talking about shrimp Jesus. Oh, finally, we're talking about it.
So for those of you who are not brain-poisoned like us, shrimp Jesus refers to an image that went viral on Facebook recently that has become kind of a stand-in for lots of people's concerns about what AI-generated content is going to do to our online media ecosystem.
Shrimp Jesus, just to describe the picture a little bit, it is an image or maybe a series of images. There's sort of a theme of shrimp Jesuses. The plural is actually shrimp Jesus. Shrimp Jesus. Facebook is flooded with shrimp Jesus. Yes.
And it is pretty much what it sounds like. It is a figure of Jesus Christ who appears to be floating in water and is made out of lots of shrimp. Yeah, I mean, Kevin, we all remember where we were when we first saw Pope in a puffer coat, right? Yes. Oh, of course. This is one of the first AI-generated images to really make the world stand still. A picture of Pope Francis sort of in what appeared to be this very cool, very fashion-y, white, puffy winter coat.
But we're now in a moment where it seems like every day when people are opening up Facebook, they're seeing some new uncanny thing, some new eerie thing, and it all really crystallized with Shrimp Jesus.
Yeah. So this was first reported on back in December by 404 Media. They highlighted the fact that a bunch of AI-generated images had gone viral on Facebook. Lots of men kneeling next to very realistic wood carvings of dogs for some reason. These images were being shared alongside captions like, I made this with my own hands. And basically, people would just
leave comments saying, like, wow, that looks great. In fact, a post with an AI-generated image was one of the 20 most viewed pieces of content on all of Facebook in the third quarter of last year. It got 40 million views. So this month, two researchers, Renée DiResta and Josh Goldstein, from Stanford and Georgetown, respectively, put out a report that used Shrimp Jesus as sort of its lead image.
But the report is really interesting, and I thought we should talk about it. It's called How Spammers, Scammers, and Creators Leverage AI-Generated Images on Facebook for Audience Growth. This is a preprint that they put out. And basically, they are exploring the ecosystem of AI-generated images on Facebook and why some of these pages are posting these, to my eye, very obviously fake images.
Well, and what did they find?
or a sand castle that's like more realistic than any sand castle that has ever been created. And I guess one question that I had about this is like, do the people who are sharing these things actually understand that they are not real? And what do you think the answer to that is? I think there's probably a lot of gullible people out there who just, you know, see these things and think, oh, this is this is real. How impressive is that?
I also think there are people who probably don't care. You know, I've had this experience recently that Facebook has decided that I'm really into cabin core. You know, these like beautiful images of cabins in the mountains, very cozy and beautiful. And a ton of these are just AI generated. Very obviously, if you look
even a little bit closely at them. And so now, whenever I see an image of a cabin or something beautiful on Facebook, my first thought is always like, is that real? Does that beautiful cabin actually exist? And a lot of the times the answer is no, it doesn't. It's just being created for the purpose of getting attention and engagement. You're telling me that they're not just posting these images to try to grow awareness of the teachings of Shrimp Jesus? No.
Maybe some of them are, but some of them are also linking to e-commerce stores and pretty low-quality news sites. These are basically just the thing that they will dangle in front of people to get them to like or subscribe to a page, so that they can feed them stuff that's going to make them money or
benefit them in some way. So in other words, Kevin, it sounds like this is the latest iteration of a very old technique, which is you try to come up with the most universally appealing images imaginable, you know, like baby animals has often
been a very popular one. And you grow the following, and then once you get it to a certain size, you sell the page, and then people just start raining spam on these poor, unsuspecting baby animal lovers. Right. So Meta does have policies that are pretty new about
AI-generated media. But Facebook does not appear to be enforcing these rules very consistently. And they're basically saying, well, look, we're working on tools that can automatically detect AI-generated content. But this research project at least suggests that they're not having total success. Yes, I think that is what the study suggests. I
I think there's sort of, like, two avenues to pursue here. One is, like, yes, it is obviously bad for scammers to come along and grow these pages and flip them. And I'm sure that Meta will, you know, try to fight back against that as best as it can. Although, you know, at the end of the day, there's never going to be anything that stops somebody from creating a page that gets popular. That's kind of what the whole site is set up to do, right?
The trickier question, I think, is how do we feel about these images in general? How do we think about them as items in the feed? What are they doing to our general sense of reality? So do you have like an emotional reaction to the flood of these images disconnected from the kind of scam of it all? I do. And in part, that's because, you know, we now have seen reports that a lot of the people who are being served these images are older people, are seniors, are people who, you know, maybe aren't the most
sophisticated and discerning consumers of online media and who may actually be thinking that these things are real. And look, the stakes are not existential here. We're not talking about political misinformation. But I think this is a good sort of proof of concept for how something like political misinformation could take off. I mean, I saw on Facebook recently, someone posted this AI-generated image of basically this
underground city underneath the Capitol. Have you seen this one? Yes. So this is clearly fake. There is no underground city beneath the Capitol, but this is the kind of thing that conspiracy theorists have been talking about for years. You know, there are these secret tunnels that allow, you know, members of Congress to traffic children and they're located under the National Mall, things like that.
And look, you could look at something like Shrimp Jesus and say, wow, this is really funny and kind of harmless that these pages are duping people with these AI-generated images. But it's a very short hop from where we are now with Shrimp Jesus to something that actually does catch fire and does mislead a lot of people about something important that is maybe related to the election. That's kind of where I come down on it is that, you know, bit by bit,
People are learning not to trust their eyes anymore. And this is really kind of the place where it starts. You were just browsing your Facebook feed. You thought you saw a cool dude who whittled a hyper-realistic version of his dog. And you thought that's the coolest thing I've ever seen. And you shared it. And then eventually someone pops up in your comments and says, hey, dummy, that's a fake.
And that's only going to have to happen to you a few times before it doesn't matter what you see in your feed: you are going to stop believing your own eyes. And so while I don't want to over-dramatize this, because this is mostly just a funny story, there is a double edge to it, and it is going to bite us, I think. Yeah.
I also think this is just a pretty predictable result of two things that have happened. One is the just absolute proliferation of these tools for generating fake images and the fact that it's very hard to use technology to detect these images. It's not impossible in some cases, and Facebook and Meta have said that they're building tools that will allow them to automatically detect this stuff. But
It's never going to be perfect. They're never going to be able to catch everything. And at least so far, it doesn't seem like they're trying all that hard. But I also think this is what happens when you deprioritize news on a platform. We know, and we've talked about it on the show, that Facebook for years now has been saying we're going to show people less news in their Facebook feeds, because they don't want the blowback. They don't feel like they can, you know, responsibly serve that content to people. They don't think that, you know, their users want as much news in their feeds.
But when you actually deprioritize news, in my opinion, what happens is that you don't actually get less news. You just get more Shrimp Jesuses. You get more people who are sharing dubious things that maybe look like news. Maybe they have a link to some site that maybe looks like a news site. People are still interested in what's going on in the world. But if you deprioritize news from trusted publishers, you will just get a lot more of this schlocky AI-generated garbage.
I think that's right, Kevin. And that's why I'm actually excited to announce my new venture, which I think is going to get around this problem and kill two birds with one stone. What's your new venture? Starting later next month, I will be debuting on Facebook, the Shrimp Jesus Gazette, a sort of newsy diary of all things Shrimp Jesus. And my hope is that that will entertain boomers for days to come while also, you know, feeding them a little bit of their vegetables and we'll give them the top headlines from around the world. So wish me luck as I launch the Shrimp Jesus Gazette on Facebook. Yeah.
I love it. Do you think this is actually a risk for Meta at all? Like, you know, their whole reason for existence for years has been to tell you what is actually happening with your friends, with your neighbors, with people in your community, and with the world at large. If it just starts being sort of a dumping ground for all of these AI-generated images and these sort of scammy and spammy attempts to use engagement bait to get people's attention and redirect
them somewhere else. Do you think ultimately people will be turned off their products as a whole? I mean, it doesn't have to be, right? Because as you note, they have policies that are designed to flag this stuff. And I think as long as people know that what they're looking at is AI-generated, it's fine. In fact, I follow an account on Instagram that just makes these wonderful creations out of AI and
And I love looking at it. I mean, it's really weird stuff. It's, I would say, a bit more sophisticated than Shrimp Jesus. But there's nothing wrong with just being creative online. I think where you get into trouble is when you don't enforce the policies you have and it starts to kind of erode that sense of reality. But look, if you're asking me, do I think that there should be actual high-quality news in news feeds on Facebook and elsewhere? Yes, I always have. Yeah.
So we'll keep tabs on Shrimp Jesus and other AI-generated images. By the way, do you know what you call Shrimp Jesus and his followers? What's that? A shrimp cocktail. Come on. ♪
Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.
Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.
Before we go, a quick update to an interview we did here a few weeks ago with the great New York Times reporter Kashmir Hill about the car companies that were gathering all sorts of information about how you were driving your car, your braking, your mileage, the dates and times, and selling that to data brokers, who then sold...
something called a risk profile of individual drivers to insurance companies. And then, of course, their insurance rates went up. Well, following that story, General Motors says it is no longer doing that. So no more GM snitch mobiles on the road. Well done, Kash.
Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. We're fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. You can check out the whole show and little extras that are fun on youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com. What's your favorite version of Shrimp Jesus?
Earning your degree online doesn't mean you have to go about it alone. At Capella University, we're here to support you when you're ready. From enrollment counselors who get to know you and your goals, to academic coaches who can help you form a plan to stay on track. We care about your success and are dedicated to helping you pursue your goals.
Going back to school is a big step, but having support at every step of your academic journey can make a big difference. Imagine your future differently at capella.edu.