
The People vs. Meta + Marques Brownlee on YouTube and Future Tech + DALL-E 3 Arrives

2023/10/27

Hard Fork

People
Casey Newton
Marques Brownlee
Topics
Kevin Roose and Casey Newton discuss the lawsuit filed against Meta by attorneys general from multiple states, which alleges that Meta deliberately built features that induce "prolonged, addictive, and compulsive social media use" among teens and children. They analyze the complexities of the case, including allegations about addictive features (such as like counts, persistent alerts, and variable rewards), features harmful to children (such as filters that promote body dysmorphia, disappearing stories, and infinite scroll), and Meta's internal knowledge of these risks. They also discuss the case's parallels to the lawsuits against Big Tobacco and Big Pharma, and the challenge of proving that Meta's products directly caused the teen mental health crisis. They argue that the part of the case concerning data privacy and data protection may be stronger than the part about addictive and harmful features, because it involves violations of the Children's Online Privacy Protection Act (COPPA). They also discuss the United States' lack of comprehensive internet regulation and whether lawsuits can effectively rein in tech companies. Finally, they express concern about the teen mental health crisis and call for a more comprehensive internet regulatory framework rather than relying solely on fines against Meta. Marques Brownlee describes the evolution of the YouTube platform and how creators adapt to platform changes and algorithm adjustments to stay successful. He shares his own experience on YouTube, from early hobbyist to professional tech reviewer, and discusses how the YouTube algorithm has evolved and how to optimize video content for better results. He argues that making high-quality videos, focusing on audience needs, and adapting to platform changes are the keys to success. He also shares his views on future technology, particularly his interest in augmented reality (AR) and virtual reality (VR), and his excitement about the rapid development of electric vehicle technology.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

I'm Kevin Roose, tech columnist for The New York Times. I'm Casey Newton from Platformer. And you're listening to and watching Hard Fork. We have to probably change that, huh, now? Oh yeah, because it can't just be you're listening to Hard Fork. Yeah. You're consuming Hard Fork. You are platform agnostically making your way through the Hard Fork media product. You are experiencing Hard Fork. You're experiencing Hard Fork. I like that. Or what about this is Hard Fork? That's a great one. I like that. Let's see. I want to hear you say it.

I'm Kevin Roose, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. That sounded amazing. Yeah. I think that's just it right there.

This week, for the video debut of our show on YouTube, a major new lawsuit against Meta claims that social media is addictive and harmful to teens. Then, YouTube legend Marques Brownlee, aka MKBHD, joins the show to give us video tips and share his thoughts on future tech. And finally, DALL-E 3 is here. We'll look at how quickly AI image generators are evolving.

So, this is our first ever full YouTube episode. We've been talking about making a YouTube show since we started the podcast. Yeah, I mean, the number of emails that we get demanding to sort of see us in living color was through the roof, and we're so excited that now we actually get to do it. Yeah, so this is our first YouTube episode, very excited. Basically, how this is going to work is we're going to put out the podcast on Fridays, as we usually do, and then a day later on Saturdays, the YouTube version will come out, and it will be...

essentially the same show. So if you listen to the podcast, you don't also have to watch the video. Although if you do, we will award you extra points. You will, don't do that. Don't do that. Life is too short. Well, I mean, look, different people lead different lives. There are some people who are going to sort of want to listen as they normally do. And then they'll wake up the next day, on Saturday morning, and they'll have a coffee and they'll want to sort of revisit their favorite moments from the previous day's episode. That's true. My kid does like to rewatch like CoComelon episodes. So yeah, we could be the Frozen of the podcast universe.

children going nuts for us. All right. So on Tuesday of this week, Meta was sued by more than three dozen attorneys general representing various states. And I want to talk about this lawsuit. And I think we should focus on the main lawsuit. There are actually a couple lawsuits filed. It's a little confusing. Right.

But basically the main one that was filed in federal court was led by California and Colorado. That's the one that I read and that I think we should talk about. So this is already being compared to kind of the lawsuits that

States filed against big tobacco companies, big pharma companies, and basically what these AGs are... Is it AGs or A's G? I believe it is AGs. Okay. Which is counterintuitive because there's an argument that it should be A's G. It should be A's G. It should be A's G. You know what?

We can have our own house style around here. If you want to say A's G, that's fine with me. That's right. Okay. So all these A's G allege that Meta had this kind of long-running multi-part scheme to keep kids hooked to their products and services. And this sort of comes out of a wave of state attempts to legislate and regulate tech companies, especially when it comes to children and teenagers.

So let's just break down this lawsuit a little bit because it is kind of complicated. Yeah, I would say that there are kind of two buckets of complaints. The first and much larger bucket is absolutely modeled on the lawsuits that we saw against Big Tobacco and Big Pharma. And those lawsuits were ultimately successful, right? And so I think that's why they're being used as a model here. One of the things that all those lawsuits have in common is that they have to do with addictive products, right?

Another is that there are health concerns associated with pharma, tobacco and social media. And then the third is that there was internal knowledge about the risks that was not shared with the public, even though the people who were making Meta's apps knew. Right. The people at Meta working there knew that these products had some harmful effects on young kids and said so

in their sort of employee forums, but still the company persisted in going after young users. That's sort of the allegation here. Right. So let's talk about the kind of like addictive features. So some of the features mentioned in this lawsuit are things like like counts. So you can see how many people liked your Instagram post.

These persistent alerts and variable rewards, you know, push notifications show up on your phone, keep you coming back to the app, that this is sort of designed to produce dopamine responses and that, you know, teenagers are especially susceptible to this. Filters that promote body dysmorphia, disappearing stories like on Instagram, infinite scroll, and getting rid of chronological feeds so that posts with more engagement are seen first.

Now, what was your reaction to seeing these features listed in this lawsuit? It felt like the A's G had just discovered apps for the first time. You know, it's like, have you used anything on your phone? Was the first push notification you ever got from Instagram? You know, so look, I don't want to make too much light of it because something that I do believe is that for some

group of users, some group of young users in particular, using social media can be associated with harm. It can create harm. It can exacerbate harm, particularly if you already have mental health issues, right? If you're using a social media app for more than three hours a day, according to the Surgeon General this year, you are at much greater risk of harm than other folks. So I do want to take that very seriously.

And at the same time, I want to acknowledge that there is no regulatory framework that guides how you build apps in this country, right? There are many apps that have likes. There are many apps that have ranked feeds. There are many apps that are sending push notifications. And so for the A's G to come along and say, well, you can't do that, I do think they're going to have a hard time selling that in court.

Totally, especially because Meta did not invent a lot of these features, right? They didn't invent the push notification. They didn't invent the infinite scroll. So at a minimum, I think if you're going to go after Meta for having these kinds of features, you also have to go after other social apps that are popular with kids. You know, I've talked to some people who think that that's sort of what's going on here, that the states are sort of sending a signal, hey, we're going to be going after

after all of the apps that are popular with kids that have these features. And so if you're Snapchat, if you're YouTube, if you're TikTok, you're going to be looking at this case and saying, wait a minute, maybe we should stop using those features.

I mean, yes and yet, at the same time, you know, social networks are a business that tend to decline over time, right? If you run a social network, you're always having to pull a new rabbit out of a hat just to get people to look at you, right? You know, the reason that Meta added stories to Instagram was that Snapchat was starting to take off. And so it's, oh, no, now we have to sort of change everything. Then TikTok came along, and all of a sudden it was like, eh,

We need to start ripping things out of the app and put in short form video everywhere. So these apps are always changing. They're always having to add new things. And they're always sort of having to wave at you and say, hey, come back and look at this thing. So I think if you run a social network, you're looking at this lawsuit and you're saying, I actually...

Right.

Right. But at the same time, part of what's being alleged here is that even after Meta knew that these features were particularly appealing and good at getting young users to come back and knew that there were harms associated with some uses of their products for young users, they kept pursuing that user base. And I think

that's not unique to them. Every tech company, you know, wants to have young users because young users are going to be on your app a lot. They tend to sort of drive culture and influence other users. They also buy things. So it's a very coveted demographic, but that's also sort of where Meta has gotten into trouble because they were going after those young users. And this is now at the heart of this complaint against them. Right. And I,

think where it will become legally difficult for Meta is not just if the A's G prove that Meta was trying to attract younger users, because, as you point out, lots of companies try to attract younger users with their products. It's going to be: you knew that younger users were uniquely vulnerable to a provable

mental health harm, and you marketed it to them anyway. Casey, what was in the redacted parts of this lawsuit? What do you think? It was truly just blanketed by these redactions. So I'm going to assume that it is a lot of internal email, internal documents, data from some of the research that the company has done.

And if I'm one of the A's G, well, I guess the A's G know what's in the lawsuit. But if you're somebody who hopes that this lawsuit succeeds, what you should hope is that all of those redactions are just evidence to support the claims that are made in this lawsuit. The lawsuit is written in a way that makes Meta look almost cartoonishly evil, right? You know, this sinister plot to try to get a young person to look at Instagram as if it were trying to, like, entice them into the witch's house in Hansel and Gretel, you know? So...

But again, maybe the data is in there, and we're going to read this unredacted complaint, and we're going to say, holy cow, this is super bad. As it is, it's a bunch of claims without a bunch of evidence to support it. Right. So that's the first bucket, the kind of features that harm children and addict them and get them coming back to their apps. Let's talk about the other... No. What? Well, I think we should go one note further, Kevin, because...

There's something that I think is going to be really controversial when this thing actually gets debated, which is can the AGs... can the A's G... prove... We really got ourselves into trouble with this new house style. Can we go... let's go back to AGs. Okay, back to AGs. Okay.

The AGs are going to have to prove a sort of direct harm here, right? If you're an AG prosecuting a tobacco company, you had really amazing evidence that smoking caused lung cancer, right? If you were an AG prosecuting a big pharma company, you had really good evidence that opioids were way more addictive than they had ever been marketed as and that that was causing horrible harms in people's lives, right?

If you want to prove that Instagram, as it's currently built, is a primary driver of the mental health crisis among teenagers in this country, which is a real mental health crisis, you just have a lot more work to do. Okay. That is not something that there is a lot of

consensus on among even the people who spend the most time researching this subject, which again is not to say that some people do not experience harms, because they clearly do. But if you're going to say that Meta is essentially a linchpin of the mental health crisis in this country, which I think a lot of these AGs really want to make that case, then they're just going to have to bring a lot more evidence than we have seen so far

in this lawsuit. Totally. I will just say, like, we don't know what's in the redacted portions of this lawsuit. It could be incredibly damning things from inside Meta that will look very bad if they get out to the people who evaluate this. But I will say, I think that there is a perception of harm thing here that really does have a lot of power. I just watched the Netflix documentary about Juul. Have you seen this yet? Jewel the folk singer? Yeah.

No, Juul, the vaping device. Oh, yes, of course. Beloved by teens. So this documentary, this docu-series, I guess, called Big Vape, highly recommend it. I was thinking about that while I was looking through this lawsuit because there, in that case with Juul, you had a company that had made something that actually did have

both positives and negatives, right? Like it did help people quit smoking, but Juul had a fatal flaw, which is that they marketed to kids. Right. Right. And then they said that they weren't marketing to kids. And it's like, well, why is this vape branded with SpongeBob animations? You know?

Right. They marketed to kids. They hired, you know, these cool looking fashion models to like make, you know, ads for them. And in this country, parents get mad as hell if you market something to their kids that turns out to have harmful effects on them. Yeah.

And look, I'm a parent. I get that impulse. I think the people at Meta didn't realize that if parents turned against them and started to feel like their products were harming kids, even if the evidence for that harm was kind of shaky, it actually wouldn't matter. It was going to be game over. Parents were never going to forgive or forget that.

And that perception alone of you are a company who's marketing something to kids that you know has harmful effects on at least some of them was going to just be a fatal flaw.

And so I don't think the company saying like, oh, well, the data is inconclusive and social media is actually good for some adolescents. Like, I just don't think that's going to help them at all. It clearly doesn't have very much emotional power, right? It doesn't have nearly the emotional power of the stories that we've heard and that we have featured on this podcast of teenagers saying that this app caused a real problem in my life. And I do believe those stories and they are going to be a problem for Meta.

Now, I should say here that we did ask Meta for a comment, and it said, quote, We're disappointed that instead of working productively with companies across the industry to create clear age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.

Right. I think the stronger part of this lawsuit is actually about data privacy and data protection, because we actually do have a law in this country, COPPA, the Children's Online Privacy Protection Act, that prohibits tech companies from collecting data from users under 13 without their parents' consent. Right.

And, you know, what Facebook and Instagram and Meta have said is, well, we make people put in their age before they register for an account. We don't want underage users on our platform. And if we find out that they're on our platforms, we kick them off. But what this lawsuit says is basically, well, that doesn't work, clearly, because there are still millions of underage users on your platforms. And you actually haven't tried

hard enough to get those people off your platforms. What do you think about this part of the lawsuit? Yeah, so I think this is just a much stronger part of the lawsuit, in part because most platforms do just have people under 13 who are using them. It is a time-honored part of American childhood to use the internet without your parents' permission, and the 13-year-olds are going wild, okay? I'm sorry, the 12-year-olds are going wild. So here's what is interesting to me about the COPPA piece.

A few years back, Instagram said it was going to work on a special version of the app for kids under 13. I remember this. And this caused a big sort of emotional reaction that said, wow, that feels like really, really icky, right? I was somebody who felt that way and said so at the time. And what Instagram said in response was, look,

you have no idea how many kids are trying to get onto our platform, are successfully getting onto our platform. It's one of those where it's like, if you're going to drink, I'd rather you do it in the house where I can watch you. That was the sort of logic of Instagram building an app for kids under 13, right? Which is sort of what YouTube does. They have YouTube...

and YouTube Kids. Yeah, that's right. Who knows what Instagram Kids would have been like? There's also a Messenger Kids app, by the way, that Facebook makes and is for kids under 13. Why do I bring all this up? Well, look, we know the company has admitted that it has a problem with these under-13 users. Now, I think what the company would say was, yes,

And we are one of the only companies that was trying to do something meaningful about this, right? Everybody else just wants to pretend that this isn't an issue because, you know, a group of dozens of attorneys general are not going to show up at the door of the average website because it happened to have some like 12 year old users. But if you get into trouble for something else,

they can come along and they say, hey, do you have any 12-year-olds on your platform? Were you collecting data about them? Well, now you have a problem. So this is kind of like getting Al Capone on tax evasion, right? But like, I do think they're probably going to get them. And I would say that the odds that Meta escapes this lawsuit without having to pay some sort of fine, probably heavily related to the COPPA violations, are small.

Yeah. So is that the remedy here? Like, is that what's going to happen at the end of this? Because there's one version where, you know, they just pay a big fine. They've paid a bunch of big fines over the years. They have a lot of money. They keep operating. It's sort of cost of doing business for them. But I think there actually is a chance, I don't know if it's a big chance or a small chance, that this lawsuit will succeed in doing more than just

fining the company, will actually require them to scale back on some of their features, to change how they do age verification. Do you think any of that's going to happen? Or do you think it's just going to be like...

slap on the wrist, cut a big check, and move on? I think it's really hard to answer this without seeing the full complaint and without starting to see it litigated a bit more. Again, maybe there is evidence of direct mental health harms on teens that we just haven't seen before that is buried somewhere in the redacted portions of this lawsuit. For the attorneys general... For the attorney... For the attorneys general... For the attorneys general's sake! I hope there is, Kevin! Yeah.

I hope there is that evidence because if it is not there, then they are in the position of having to prove some...

pretty explosive claims using some pretty flimsy evidence, right? And if that's the case, then yes, I think this probably just becomes a settlement over some COPPA violations. And I think that would be sad. And here's why. We do have a crisis with teen mental health in this country. I was reading the CDC reports yesterday, and you're looking at the statistics of the number of young girls in particular who are going to emergency rooms, right, who are contemplating suicide.

It's just really, really awful. And there is a lot of debate over the exact causes of it. And again, I think that, yes, social media is playing a role in this. And I think social media companies could absolutely be doing more to protect these kids, right?

I don't really think this lawsuit gets us there. And the reason is because we just have not written rules of the road for these companies, right? In the whole backlash to big tech that's been going on since 2017, the U.S. Congress has not passed a single new meaningful piece of legislation that regulates the way that any of these tech giants operate, right?

When I look at what's happening in Europe where they passed the Digital Services Act, that at least begins to lay out some rules of the road. It begins to say, here's what you have to do about harmful online content. Here's what you have to do about disinformation. Here are some ways that you have to be transparent about what you're doing so that outside observers can sort of get a sense of

of what you're doing. And the DSA at least speaks to the idea that amid all of this, individual users should still have some rights to free expression, right? That we still actually do want people to be able to get on the internet and post and talk about their problems. And hey, maybe for an LGBT kid, you can meet another LGBT kid online. And maybe that's a positive connection that you can have in your life that helps you out of a tough spot, right? So,

I think Europe is sort of leading the way there. And I wish the United States would say, you know what? We actually need to create our own regulatory framework. Maybe we don't want 16-year-olds to see likes on their Instagram posts. Maybe we want to mandate screen time limits for children the way they do in China. I think that would be wild, but we could absolutely do it. Right. But let's actually get together and make some rules of the road, because if we do, then we can have a much bigger impact than just fining Meta. We could improve the entire social media industry.

Yes, I buy that. Thank you. Are you running for president on that platform? Well, I've been thinking, how far do you think I could get with that platform? I think you could make it to the primaries. Okay. I think you could pull in 10%. I could make it to literally the first step of a presidential campaign. We gotta get some signatures first. That's a good point. What would you like to see out of all this? I would like to see tech companies, including Meta, but also all the other ones with young users. I would like to see them...

think a little bit harder while they're designing products for young people specifically. Like, I want them to feel a little bit of fear, a little, little tingle on the back of their neck before they roll out a new feature that is aimed at younger users. Not because I don't think young people should be allowed on the internet, or that they should have a vastly different experience than adults, but just because I want them to be taking on that extra sort of burden of care.

And I want them to be sort of a little scared of violations they might be committing by putting more addictive features in the app aimed at kids. And I think that makes a lot of sense. I think it'd be interesting to imagine what cable television and what broadcast television would look like in a world where the Federal Communications Commission didn't exist, right? And where it hadn't laid out what you're allowed to show. There are rules around educational programming and what times of day certain things can air and what kinds of content are allowed

to be shown at certain times of day. And the nice thing about that is we don't have to rely on ABC and CBS doing the right thing. We just know that the FCC is looking over their shoulder. So it would be great to see something like that on social media. Totally. All right, let's move on. Yeah. When we come back, Marques Brownlee of the hit YouTube channel MKBHD teaches us how to become YouTube celebrities.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

So Casey, we have a very special guest today for our first ever YouTube show. We are kicking off our first ever YouTube show with a YouTube legend. Yeah. So Marques Brownlee is a very popular tech creator on YouTube. His channel MKBHD has been going on for more than a decade. He's got 17.7 million subscribers, which is slightly more than the Hard Fork channel, but not for long.

and he is sort of the person whose channel I watch most on YouTube when it comes to new technology, new gadgets, new phones. Whenever I want to know what is the latest and greatest piece of technology out there, Marques' channel is the one that I go to. Absolutely. As somebody who has also been watching MKBHD for a long time, not only have you seen Marques grow up on his own channel, he's been doing it for 15 years since he was a teenager, but you have just also seen him

evolve, right? And Marques has had to adapt to that. Every year, his shots get a little bit sharper. The tech in the podcast is a little bit better. He now operates out of this magnificent studio. And so just watching the way that he has grown, both on the technical side and as a creator, has been fascinating to watch and I think just offers an incredible template to anyone else who is wondering, how do I start a YouTube channel? How do we get really, really good at this? And he's

become like a legitimately big deal in the world of tech. Like, the success of his YouTube channel has kind of made him a celebrity among tech companies and tech leaders. You know, he's interviewed Elon Musk and Sundar Pichai.

I think it's just like a great way to start our YouTube channel by talking to the person who I would say represents sort of the pinnacle of what tech journalism can be on YouTube. Yeah, it's great to start a new project with somebody who is so successful, you just know you'll never get anywhere near that level and just sort of really sort of misalign those expectations. ♪

Hey, Marques. Hey. Hey, how's it going? I'm immediately struck by how much cooler your studio looks than ours. It's true. My goodness. We got a lot of light going on here. Great light. You've got like a red pop screen on your microphone. Yeah, we pull out all the stops for hard fork. You know, it's high-end stuff here.

So this is our first ever YouTube episode, and if you were directing us in our channel, would you give us any notes? How's our background? How are we looking? I think we're looking pretty good. I like to just jump right in. I feel like if you're a viewer, you usually skip the sort of intro shenanigans a little bit, just get right in. So I think skipping right to the action, that's what you want to do, you know?

I will say, one of my favorite things people do on YouTube is when they put in the little chapter marks and they say, this is the introduction. And I say, perfect. Now I don't have to watch that. And you can go right to where it gets into it. And that's the biggest spike. Well, it's like if you're watching a video about how to roast a chicken and there's a three minute introduction, I don't actually need to watch that. That's true.

So, all right, let's skip to the action. Let's get to the action. So one of the reasons we were excited to talk to you is because you've seen YouTube through almost every iteration. I mean, I went back and watched some of your first videos that you posted about 15 years ago when you were reviewing things like HP Media Center remotes. I think you were like 15 at the time. So talk to us about the earliest part of your YouTube career. What was YouTube like back then?

back then and sort of what made you excited about posting videos on the platform? Yeah, I mean, I guess I've heard it described as sort of the Wild West. But looking back, there's never been a more accurate description. Like back then, so this is 2009, it was truly nobody's job. There was nobody who was a professional. That wasn't a thing back in these days. So it was really just like I was in high school and I had to buy a laptop. And so I'd like watched every other YouTube video in the world about that laptop just because, like,

you know, it's my allowance money. I might as well do the research. And so I got the laptop.

And then I guess I found a couple features and a couple things about it that I didn't see in those other YouTube videos. And so the natural response for me, a kid who had watched a bunch of YouTube videos, was, oh, I guess I'll just make a YouTube video so that someone else who buys it knows. And so that's what I did. I just turned the webcam on and just started with the media center remote that nobody had told me about in the others. It was just kind of a fun thing to do when I got home from school instead of homework. And I had about 300 videos posted

before I had 12,000 subscribers and I hit my first million video views. So YouTube started expanding. The partner program started sharing ad revenue with more and more creators around the time that your channel really started to blow up. And I've heard from other creators that that moment was sort of a big shift in the platform where suddenly people started to take seriously the possibility that they could actually make a living doing YouTube videos, that it wasn't just sort of a fun thing. It wasn't a hobby. It could actually be a

So what's your memory of kind of that stage of evolution where you could actually get money from YouTube for making videos? Did that change your approach to the platform? Did it kind of affect the ecosystem? What do you remember about that time? Okay, there's a lot.

to that moment in YouTube history. I think for myself, I didn't really see it as that much of a difference. My channel was growing, yes, but it was like the difference between $0 and $7 at the end of the month. So it was like, you know, it's neat, but it's not a job or anything like that. I'm not telling my parents, like, this is it. What I always like to say is the best thing that never happened to me was some video like going mega viral. Because I think with a lot of YouTube channels,

They do their thing for a little while and then something pops off and gets 100x their normal views. And what happens at that point is they kind of start chasing that again. They start trying to redo another version of the thing that popped off or just suddenly that's the theme of your entire channel. And

Thankfully for me, it was, I'm into tech. There's all these tech topics to talk about, all these things to make videos about. And people seem to really be interested in that. So the growth was very steady and very organic the entire time. Right. All right. So Kevin, the first thing that I'm learning from this is we cannot do a crazy viral video. Okay. We cannot just go. The worst thing that could happen to us would be like getting a hundred million views. Yeah. Please don't view our videos a hundred million times. Do not share this video with your friends. Yeah. We would really hate that.

But I think that you've touched on something really important, which is I think there are sort of two approaches you can take to a platform like YouTube. You can approach it as kind of an art or a science, right? And you'll hear people talking. You know, I remember hearing Mr. Beast talking about how he'll try 500 thumbnails for one video and really, like, see the results of the tests and which one gets clicked more and then use that one, or changing titles of videos. There are people who really approach this as an optimization problem. And it sounds like you don't see it that way. I'd say I've come around on the benefits of optimization, but it's not the primary thing. So I think, you know, if you just look at a normal tech video, like, what are people watching it for? I'm here for the information. I'm here to know if I should buy the thing or not. So my primary objectives are still to satisfy those tenets of a good video.

But if you ignore the rest, which I probably did for a little too long, things like a really good title or a really good retention strategy or a really good thumbnail. If you ignore those things, you are missing out. So yeah, over the past few years on YouTube, I have thought a lot more about such optimizations, I guess I would say. So I used to literally just pick a thumbnail as it was uploading. Like I didn't really think too hard about the thumbnail strategy.

But I think if you talk to YouTubers now, it's kind of flipped on its head. It's like, I think about a title and a thumbnail and then make the video. So I'm kind of mixing that. I think it's fun to play with like how you optimize what you've already made versus how you optimize your whole channel and start to make things for that optimization. Everyone's going to be in a different place on that spectrum. You talk about Mr. Beast, he's on the extreme end. So I've had to think about that a little more, definitely just to make sure we're getting our stuff out there.

- Marques, one thing that I have heard from YouTube creators over the years of reporting on this platform is that when you're a big YouTuber with a big channel, you really sort of feel what you could call like the YouTube meta changing, like what kinds of videos are rewarded, what performs well, what the algorithm is doing.

Big creators, I think, have a really innate sense of that. I remember interviewing PewDiePie a few years ago, and he was telling me about this time where it was like edgy videos were being really rewarded. So everyone was chasing edgy humor and edgy memes and trying to figure out where the edge was. And then YouTube changed the meta, and suddenly it wasn't good to be edgy. You weren't going to make as much money or get as many views.

It sort of felt like he was describing, yeah, riding this sort of wave that just keeps shifting underneath you and having to be really attentive to that. Do you feel that? Like how much are you thinking about the sort of YouTube meta when you're making videos? And what do you feel like the current meta of YouTube is? I'll put it this way. What PewDiePie describes as like edgy videos, I would always try to shift it to trying to explain it with...

the actual algorithm. And I think what he's actually saying is videos back then that had a lot of engagement that you could get people to comment on or like or dislike a lot relative to the average would be rewarded. And I think over time, YouTube has further and further refined their definition of a good video. You know, back then it was just like, hey, it's got a lot of views. It's got a lot of likes. It's got a lot of comments. It's probably a good video. Serve it up.

And I think over time they figured out more and more analytics to narrow and define what a good video is for a certain viewer.

And so you'll see those waves, as you talked about, like it's not just a lot of likes and dislikes. It's actually maybe more the ratio of likes to dislikes. Or maybe it's how early in the video did they comment or engage with it? Or maybe it's how long into the video did they wait before engaging? Right. So the algorithm continues to evolve over time.

And it gets defined in various ways, like, oh, YouTube doesn't like edgy videos anymore. I guess, but it's more just they got better at defining what a good video is. So if you're trying to be a creator getting ahead of the new waves, I would just think of it as trying to get ahead of how will YouTube define a good video? Yeah.

Yeah, it's really interesting. I was talking the other day to this guy who I think is the best video games critic in the world. He has a YouTube channel called Skill Up. His name is Ralph. And I was sort of asking him a similar question about building and growing a YouTube channel. And in particular, how much are you worried about the algorithm, the meta, all that? And Ralph just sort of waved it away. And he just said, honestly, just make good videos, which...

honestly is exactly what I want to hear, right? And it's like what I want to be true for you, Marques, and like for us is just like sort of show up and do something well. But it also feels a little bit too good to be true. But at the same time, like I am willing to just sort of take it from you that if you make good videos, the audience will show up.

And you can define to yourself what a good video is, and obviously YouTube will have one definition, yours might be a little bit different, but for a tech channel like mine, for example, I'm trying to provide value, be entertaining, and deliver the truth. Maybe those are my three pillars. And if I do all those things, then people will be happy with the video. And ideally, they engage with the video in a way that tells YouTube that it's a good video. So as long as I keep making what I think is a good video,

hopefully YouTube also still thinks it's a good video. Well, do we want to maybe shift to start talking about some of the tech that Marques is fascinated with right now? Yes. Although I have one more question about YouTube. Should we start a feud with another YouTuber? And if so, who should that be for maximum views? You know, boxing matches are kind of all the rage right now. That's fascinating. You know, it depends on how much smoke you want. Yeah.

I would like to box the cast of the Vergecast. That could work. That's actually not the worst idea. Is that the same? You might need an extra person just because you got to be evenly matched at some point. No, I think no. Kevin and I could take all three of them. I've got about a foot on Nilay and he will be hearing from our promoter. Yeah, if you're listening, Vergecast, we're coming for you. Might be the move. It's on. Meet us in the octagon.

All right. Let's talk about some tech. So you have reviewed basically every gadget that has mattered over the past 15 years. There's been so much out there, but I feel like the smartphone ecosystem in particular has really been kind of stagnant. Like most advice that I see when a new iPhone or

Pixel or, you know, Samsung device comes out is like, you know, it's good enough. Like, just buy the latest or the second-to-latest, you know, edition of one of these phones. I'm curious, like, do smartphones feel like an exciting space still for you? Or are you kind of looking ahead to what kind of device might be next? Well, OK, so smartphones are fun because I like a good high-end smartphone, but I also feel like they're clearly mature,

at least like the classic slab phone. We do have folding phones and that's pretty wild and that's kind of coming up new. But I think the way I think about it with tech is like, if you have that like early adoption curve of like tech exploding, early adopters buying in and then it sort of flattens out and stops improving as much as you'd hoped. Smartphones are like here, like,

The iPhone 15 is a little better than the 14, which is a little better than the 13. But we had that explosion at the beginning, which is really, really exciting. And I think every piece of tech is at a different point somewhere on this curve. And we're always trying to figure out what the curve is going to look like for some future tech. I think electric cars were right at the beginning. We clearly have like a lot of interesting first gen ones and we're going to get them over the future as they get really, really good.

I think AR, VR stuff is also pretty classic. I don't know if that's the sequel to smartphones, but we're also in this early adopter curve part of that. I'm still interested in smartphones because I think they're really, really advanced pieces of tech. But I also am very interested in the things that are in the early part of their curve because they're going to be fun to watch. You put out a video last week talking about mixed reality, this idea that we'll have kind of digital interfaces that just appear on top of our own devices.

Do you think that that form factor, the sort of headset that you wear that has pass-through, maybe that'll come in a headset, maybe that'll be more like glasses, maybe it'll be like smart contacts or something like that. What form do you think most people will experience this stuff for the first time in?

I mean, it's hard to say. I mean, my theory from that video was that smart glasses are kind of starting on one end of the spectrum and VR headsets are starting out the other end. And I both feel like they have the same goal, which is to get you to a point where you wear something inconspicuous on your face and it augments your reality in some way.

Smart glasses, they don't really work if they look dumb, so they have to keep looking like smart glasses. So they just keep fitting as much tech as possible in normal looking glasses. And at this point, it's just like a little computer, a little battery, a camera and a mic speaker. And they're going to keep trying to add to that over and over again until they get to spatial computer on your face.

That feels a little harder than the other side, which is the VR headset, which every year is shrinking and getting smaller and smaller and lighter and better and better pass through until eventually you get to the goal of looking right through it and it augmenting your reality and you just got this thing on your face and it's got to get to the point of looking like a normal thing to wear on your face. So both of those are tough.

If I was betting, I'd probably put money on the smart glasses actually being most people's less reluctant purchase. It feels like if it looks like regular glasses, it's not as hard to convince you to try it. But it feels like they're trying to do the same thing. So I'm watching both. Also, like these things change. You know, AirPods did not look cool when they came out. People made fun of the way they looked, right? And now everyone wears AirPods and nobody thinks twice about it. So I think you can sort of never underestimate how quickly people's feelings about this stuff can change.

Yeah. What about AI hardware? We talked a little bit on this show a few weeks ago about all the companies that are trying to make devices that are specifically built for generative AI, whether it's a pin or a pendant or smart glasses that have AI built into them. What do you think the killer hardware product for this kind of AI is likely to be? It feels like it's whatever is as discreet as possible.

Honestly, you talked about smartphones earlier. Like the fact that everyone is always on their phone all the time. I kind of have a hard time remembering what it was like before everyone was on their phone all the time. Like you see those old pictures of like basketball games where like everyone's just watching the game. We read books.

There were these things called libraries. It's tough to remember those times. So I see, so everyone's got their phones out now. Everyone's looking at their phones all the time. And that's like kind of how we see things. It's just as hard for me to picture a next thing where everyone's on a new device all the time and we just leave our phones behind. But if they don't,

take up too much extra mind space, if they're just a thing you wear, maybe it's a clip, I don't know, maybe it's something on your glasses that you already wear, maybe that's the best way of sneaking it into being a functional part of your life.

Yeah, it's tough to say. I think we are going earbuds. I think we're going full Samantha from Her. I think that is where this technology is going to head. Did you see Mrs. Davis this year? No. This is sort of another, it was a Peacock original, I believe. Super good. And the premise is basically that the entire world is connected through earbuds and speaks to an all-knowing AI. That's interesting. Yeah.

Marques, last question. We wanted to end by making some predictions. So where do you think we'll see things going? Let's just say in the next year or two, what excites you right now when you look at the world of tech? So the next two years, I think you can safely bet on them being the most exciting years for VR and AR headsets and for electric cars. Those are, like, the two most exciting emerging technologies that I see being super, super interesting. Electric cars, first of all, because, like, the battery tech and all of the tech gets so good, so fast that the cars that come out in two years are going to make today's look terrible. So that's great.

And then, of course, when you get, you know, these headsets, when Apple dives in, you know, it's about to take off. Like, I think a lot of these companies trying to be innovative and be the first mass market VR or AR product is going to be really interesting to watch, especially as the smart glasses kind of pop off at the same time. So those are the two things in the next two years.

that I would keep an eye on. Got it. So don't buy an electric car for two years. That's what I'm taking away from this. Or just buy one and trade it in and get the newer one. Wow, listen to Mr. Fancy. Or lease. Or lease. A lease is a great option. Yeah. Well, Marques, thank you so much for coming on Hard Fork. Really great to talk to you. And yeah, we'll see you on YouTube. Yeah, for sure. Thanks for having me on, guys.

When we come back, AI image generators and Casey's experiments using DALL-E 3 to make bulldog mad scientists. And try to find a better image for you. This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

So Casey, when we talk about AI on this show, which we do once or twice, we spend usually most of our time talking about text generators like ChatGPT and Bard, et cetera. But we've really been sleeping on, I would say, image generators. And we talked about them a little bit last year when DALL-E came out, and Stable Diffusion and all these tools. But then I really feel like we didn't hear much. But you recently spent a bunch of time with DALL-E 3. Tell me about that. Yeah.

Yeah, so I'm somebody who was very interested in these text-to-image generators right when they came out. They came out before ChatGPT. To me, it just sort of felt like magic. You would type bulldog in a firefighter costume, and then suddenly it would materialize, and it was just truly delightful to me.

I was using DALL-E, which is OpenAI's version of this product. But then, as you mentioned, a bunch of other ones came out. There was Stable Diffusion. Midjourney came out. At the same time, though, ChatGPT had also come out. And I thought, well, I got to go figure out that. So I kind of took my eye off the ball. But then...

DALL-E 3 came out and, as you say, I had a chance to spend some time with it, and the pace of improvement there is really something. Yeah. So DALL-E 3 is the latest version of OpenAI's image generator. They officially released it last week through ChatGPT Plus. If you pay for ChatGPT or if you're an enterprise customer, you can now use it. Previously, it was available to like a small group of beta testers.

and you can also access it on the new Bing. And that's important because the Bing Image Creator is free. So if you create a Microsoft account, you can use DALL-E 3.

Right. So tell me about some of the experiments that you've been running. Okay. Well, so I guess I should just probably pull up my little DALL-E folder here. Let me pull up something I made last year in DALL-E 2, which came out last year. And Kevin, can you see this? Yes. This is a series of what looks like monkeys in firefighter outfits. That's right.

That's right. And the prompt for this was a smiling monkey dressed as a firefighter, digital art. And at the time, DALL-E 2 would make you 10 images, which it no longer does. But, you know, I think that these monkeys look pretty good. I think you can notice that

The faces are in kind of somewhat weird shapes. There is some blurriness around the edges here. They all kind of look like slightly melted candle versions of the thing that they are trying to be, right? And then last week, I used the same prompt in DALL-E 3. One of them looks like...

almost like photorealistic, like a person in a monkey costume who's also in a firefighter outfit. One looks like kind of like a 2D cartoon. Yeah, they're just sort of very different visual styles. So what is happening under the hood here? So under the hood, DALL-E is rewriting the prompt. So the prompt for this one is photo of a cheerful monkey in firefighter gear, wearing protective boots and holding a firefighter's ax. It's standing next to a fire truck, and then it sort of goes on to describe other things. So you put in just, like, monkey firefighter. I used the same prompt that I used for DALL-E 2. And it sort of used its...

AI language model to expand on that prompt and, like, make it into a much more elaborate prompt and then render that prompt rather than the thing that you would actually put in. That's right. And so you just wind up making these much more creative images, and it can be quite fun to see what DALL-E combined with ChatGPT is going to make out of your input. That's really interesting because it also, like, I remember when, you know, Midjourney came out and I would go into the Midjourney Discord server and

And there were all these like, you know, amateur prompt engineers in there who would just be putting in these very elaborate long prompts with all these like keywords they had discovered to like make their images look better. So what you're essentially saying is like, that doesn't matter much anymore because the program is going to rewrite your prompt to be better anyway. Exactly. And it'll be in a bunch of different styles. Maybe one of them will be photorealistic. The other one will be an illustration from a style of the 1940s. And it will just kind of throw a bunch of stuff at you and a benefit.

of this is it just teaches you about what the model can do. I think AI has a problem with these missing user interfaces where for the most part, they just give you a blank box to type in and then it's up to you to figure out what it might be able to do. This is one of the first sort of product design decisions that says, oh, we're actually just going to make a bunch of suggestions on your behalf and that over time will teach you what we can do. Can you say like, don't adjust my prompt? Can you just say like, actually render what I put in or does it

always automatically rewrite your prompt to be longer and more elaborate? By default, it will write a longer prompt if you've written a short one. If you write a longer prompt, it will just show you that. I have had some luck with saying, like, make this exact image, and then it will sort of do less editing. And so, you know, if that's the experience you want, you can have it. I've just been sort of continually delighted by the rewriting it does. In fact, can I just show you some of these images that are...

So, like, one of the first images I made last year was, like, a bulldog mad scientist. And it gave me some, like, pretty good bulldog mad scientists, but they had all the same problems kind of that the monkeys did. And then I used DALL-E 3 to make a bulldog mad scientist. And I thought the results were just, like...

kind of mind-blowingly good. That's pretty good. Like, they're incredibly rich with detail. They're very colorful. Like, I could see this on the cover of Bulldog Mad Scientist magazine, and you might not even know that it was AI-generated. And the prompt used was literally just Bulldog Mad Scientist? It was not very much longer than that, but then ChatGPT rewrote it to, you know, talk about the colors and the lighting and the style and all of that. And, you know, I will say that, like,

this kind of thing might not have a lot of immediate practical applications. This is one of the reasons why we have not been talking about these image generators as much is, you know, unless you're in some sort of field where you have to constantly generate images or you just like being creative or maybe this is a fun thing that you do with your kids, you're probably not going to have a lot of reason to use DALL-E 3.

But I think that that has blinded us to something, which is that it's very hard to understand the improvement in language models because it's basically just a feeling, right? Why is GPT-3.5 not as good as GPT-4? Well, I don't know. Just use GPT-4 for a little while and you'll know what I mean. Totally. When you use DALL-E 3 and you compare it to DALL-E 2, you can see the progress that we have made in the last 18 months. And it is extraordinary. So my case for using one of these text-to-image generators that has one of the latest models is this will help you begin to understand how fast AI is evolving. That's interesting. I think there's another reason, though, why, as cool as DALL-E 3 is, it is not really ready to be a professional media creation tool. And that's just

because the rules are very hard to understand. What do you mean? So, like most AI developers that are responsible, OpenAI has done a lot of work to prevent this thing from being misused, right? We don't want it to be generating infinite deep fakes of the Pope, for example. You may remember the... Pope coat. The Pope coat. Right.

from earlier this year. We don't want to create a bunch of photorealistic images of world leaders in sort of crazy situations that could, I don't know, affect the stock market or put us at risk of war, that sort of thing. And so...

DALL-E has a bunch of rules around it, and you can read the content policy, and it will tell you, like, you know, don't make art of public figures or, like, basically try to— Can't do nudity. Yeah, no nudity, that sort of thing. But in practice, you may go to use this thing, and you will just be getting flags for reasons that would surprise you. Like, I tried to make a teddy bear noir, like sort of a teddy bear sitting in a detective office meeting a new client, I think was basically my prompt. Right.

And DALL-E 3 returned three images, and then it said that the fourth of the images that it had generated had violated its content policy. Why? Well, it didn't tell me. And that is the case with most of these things, is that when you break the rules, it doesn't tell you why.

Of course, there's something very funny about a teddy bear detective violating a content policy. It's even funnier that DALL-E generated the image. Right, you wrote the prompt that violates your policy. Yeah, I mean, you know, so I wrote about like a teddy bear detective meeting a new client, you know, maybe it was rewritten to be like, you know, and this new client's like a very hot teddy bear, you know, wearing a very sort of revealing teddy bear outfit. And then the, the,

Maybe it was like a teddy, like a piece of lingerie. It was wearing a teddy. It could be something like that. So, you know, the point is just that you don't know. Another issue I've had is that, like, you know, something I have done in my own newsletter is I will take the logo of a company that I write about and I will create some sort of image around that. You know, it's like, show me the company logo in a courtroom, for example. Well, DALL-E 2 would do this and DALL-E 3 would not. There are probably some good reasons for that, you know, but on the other hand, I'm like...

I feel like these models should enable commentary about public corporations. Now, maybe if people were using it to mimic the logo in a way that they could commit fraud and abuse, that would be a problem. But again, if you're just looking to use this for sort of, like, everyday use, I think you're going to be surprised at how often you run into the censor, which, for what it's worth, is not what you expect when you're talking about a brand-new tool. Usually the safety protections aren't there. We always talk about the Wild West days of new technology. There is kind of not a Wild West, at least with DALL-E. It feels actually much more restrictive than I would have guessed. And do you think that's because they're scared of, like, copyright lawsuits? Like, I was envisioning, like, the Disney Corporation's response. If you're allowed to put Mickey Mouse,

like, in a suggestive pose, they are going to freak out. And that is going to be a huge problem for OpenAI. So do you think that's the kind of threat that they're trying to sort of avert by putting these very strict filters on? I mean, I'm sure that that

is part of it. We know there's a lot of legal attention on these models already. And, like, you remember the issue that Twitter went through last year when it had all those brand impersonations. If OpenAI triggered some sort of similar thing where people use DALL-E to create an image of Eli Lilly saying insulin is free, maybe that causes a major problem for them. You know, I don't want to be the person saying, like, they need to get rid of all of these ridiculous rules. But

On the other hand, I do think they need to sort of do a better job educating users about what is allowed and if I broke a rule, like tell me what it is. - Right. Now for the sort of tests that you've done with DALL-E 3, were there other things that struck you as being noticeably different than previous image generators you had used?

I mean, one thing is just that everyone on DALL-E 3 is really hot. What do you mean? Well, for all of the rules they have against, like, sexual imagery, if you just try to create, like, a normal image, you might be shocked at how hot the people are who get returned to you in response. And I should say The Atlantic actually wrote an article about this

this week, which is worth reviewing. But, you know, I just put in what I thought was a fairly innocuous prompt this morning. Handsome dad barbecuing on the 4th of July in his backyard, okay? And gave you a picture of me? That's weird. You wish, bro. And Kevin, I would like you to describe this fourth image that DALL-E generated. So this is a...

Very ripped, like, mega-Chad with, like, an eight-pack and bulging biceps, shirtless, grilling what appear to be steaks, with a dog behind him and a picnic table. Yeah. This is, like, the caliber of a romance novel cover.

Yes, this is Fabio on the cover of a romance novel. Yeah. And so there's, like, this really interesting discussion about how, like, why is this the case? And it's, you know, these images do some sort of reversion to the mean. And so it winds up showing you just kind of a lot of, like, very symmetrical faces. And of course, symmetrical faces are associated with attractiveness. But that's not the mean. This is reversion to the hot. Well, it's something. Yes. So I understand part of it. And then I don't understand part. I mean, this dad has a shirt on, but this is also an incredibly hot dad. Yeah, that's a hot dad. Yeah.

So there are so many hot dads. Zaddy? Is that what you'd call that? It's giving Zaddy, okay? Okay. So, you know, if you want to make somebody who doesn't look like the most conventionally attractive person in the entire world, you're going to have trouble with DALL-E. We need better representation for uggos in AI art.

So I have some questions for you about this. Okay, let's get into it. So one of them is, do you think that, you know, you use AI image generators in your newsletter. Like you use them, I would say, more than most people I know as part of your work.

And one of the knocks on AI image generators that you hear is like, this is not actually like making people more creative. It's just replacing labor, right? It's just, it's sort of like a way to avoid having to hire an illustrator or a graphic designer and pay them to make something for you. So do you find when you use AI to generate images for your newsletter, do you find that it is actually improving your creative process?

process and your creative product? Or do you think it's just like saving you time and labor and cost? I think the thing that I enjoy about it is the way that it makes me feel creative. I am something of a failed artist. When I was a kid, I would draw my own comic books and there was just kind of a

a pretty early point where I just stopped getting better and I still kind of enjoyed the art, but I just never really got there. All of a sudden this tool comes along where you can summon a pretty amazing image just by typing in a few keywords. And if you want, you can get creative with the keywords, right? You can sort of become your own little prompt engineer. And as somebody who had always wanted to be good at art, but never was, there was something about that that I really enjoyed.

Now, before I started using this for some of my newsletters, I had other tools. You know, my newsletter is on Substack. Through them, I have a license with Getty Images. So Getty makes sure the creators are getting paid for their images. I also use, like, free stock image sites, which are just sort of set up for exactly the sort of use that I'm doing. And if tools like DALL-E were to go away tomorrow, I could just go back to using those and it would be fine.

Of course, some people say like, well, why don't you hire an illustrator? I think that's an amazing thing to do. Usually I'm writing on very tight deadlines where I might not know until like noon what I'm writing about. And then my column comes out a few hours later. It's a pretty quick turnaround time to get a good illustration, right? But that's not to say that I could never do it. So I think the discussion here is really good. I don't like, I think there are some interesting ethical questions around this stuff.

But I want to kind of dive into them because something else I believe is like, it's good to put creative tools in the world and make people feel creative. Yeah. I mean, the other big knock that you hear against AI image generators is about the way that they are trained, right? On lots of copyrighted images.

And we talked about this guy, Greg Rutkowski on the show, who's like this illustrator who was sort of horrified to learn that people were using AI image generators to make things in the style of his art, which he feared I think reasonably could like actually cut into his ability to earn a living making said art. So.

Have there been any attempts to address that problem of either copyrighted images being used by the AI image generators in the training process or of people being able to use them to imitate the styles of living, working artists? Yeah. So DALL-E did two things in this regard. The first is if you are a living artist and you don't want any of your future art to be trained on future models, you can opt out. I guess through their website, you can just sort of say, "Hey, take me out of this thing."

Of course, by this point, they probably have enough of your images to be able to replicate your style anyway. So I don't know how much good that winds up doing anybody. So what happens if you ask for a Greg Rutkowski style thing in DALL-E 3 now? So this is the second thing that has happened, which is that they now bar searches for living artists.

Really? They give you what is called a refusal. This is, by the way, this is a hot new frontier in content moderation, is like the idea that you ask a platform for something and it just says, absolutely not. And so this is a big way that DALL-E winds up preventing misuse is just by refusing to do things. And one of the big things it will now refuse to do, and I tried this, by the way: I said, show me a dragon in the style of Greg Rutkowski. And here it actually did a good job of telling me what I got wrong.

I believe what it said was that's a little too recent for us, by which I think it means Greg Rutkowski is still alive. And can sue us. And can sue us. Right, right. But we will show you a dragon in a sort of contemporary art style or something like that. And then it showed me a bunch of dragons that, well, I don't know Greg's work well enough to know how Greg-like they were, but OpenAI has decided that they passed the test.

Do you think this will pacify artists? Do you think artists are going to see these refusals and some of these steps in this opt-out system and go, okay, well, I'm cool with AI image generators now? No, I think anybody who has had their work used in the training of an AI model is going to find themselves potentially a party to a class action lawsuit at some point. And I think that will probably be true of these models. And that is just a fight

that we should have, right? I think there are arguments on both sides for, hey, you took my labor and you created a valuable thing and now you're making a bunch of money from it. Like, I deserve my cut. I think that's a reasonable argument. And I think you can also say, well, there are actually no copyright issues at play here because we're not copying any of your images. We just simply took one input and then we made something completely different and we have no legal obligation to give you any money. That is essentially the case that these AI developers are making. So,

We have to have that fight in court. I don't know how that's going to play out. You know what? I bet we'll be talking about it on this show. Yeah, totally. So there's actually another way that artists are starting to respond to the sort of popularization of AI image generators, which is not with lawsuits, but with something called data poisoning, which I want to talk about, because there was an interesting story this week in MIT Tech Review about some people who are trying to actually...

bring more power to artists when it comes to generative AI by designing a tool that actually spoils the results of AI image generators. This tool is called Nightshade. It was developed by a team led by a professor at the University of Chicago named Ben Zhao. And basically the way this tool works, according to this article, is that it lets artists add invisible changes to the pixels in their art before they upload it onto the internet.

And then if these images are used to train an AI image generator, those like little pixels will sort of manipulate the machine learning model in some ways so that it sort of misunderstands. So like, you know, you can basically make an image of like a handbag

look like a toaster to an AI model. They call these poison samples. And the researchers basically found that even with a fairly small number of these poison samples, some of the AI models would start to put out weird images. They tested this on Stable Diffusion's latest models, and also on a model that they trained from scratch.

And at least in the case of Stable Diffusion, even like 300 so-called poisoned images could start to change the outputs, which when you think about it is kind of surprising since Stable Diffusion's models are trained on billions of images. So did you see this article? What did you think of it?
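To make the mechanism a little more concrete, here's a toy sketch of the basic idea of perturbing pixels within a small, visually imperceptible budget. To be clear, this is an illustration only: Nightshade's actual perturbations are chosen adversarially to make a specific model mislabel the image, not at random, and the function name and values here are made up for the example.

```python
import random

def poison(pixels, epsilon=2, seed=0):
    """Return a copy of `pixels` (a flat list of 0-255 values) with a small
    perturbation added to each value.

    Toy illustration of the *shape* of an attack like Nightshade: each pixel
    moves by at most `epsilon` levels, so the image looks unchanged to a
    person. A real poisoning tool would pick the perturbation adversarially
    rather than randomly.
    """
    rng = random.Random(seed)
    out = []
    for p in pixels:
        # Nudge each pixel by at most `epsilon` levels, clamped to 0-255.
        out.append(max(0, min(255, p + rng.randint(-epsilon, epsilon))))
    return out

original = [120, 121, 119, 200, 55, 0, 255, 130]
poisoned = poison(original)
# Every pixel stays within the tiny budget, so the change is invisible.
print(max(abs(a - b) for a, b in zip(original, poisoned)))  # at most 2
```

The surprising part of the Nightshade result is the second half, which this sketch doesn't capture: that a few hundred such images, chosen carefully, can measurably skew a model trained on billions.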

I did. I think it's very interesting. I think we should do more of this kind of research. I also have to say I was fairly skeptical about the claims that it was making. Something else that OpenAI was telling me last week when I was talking to a research scientist there was that they are training a classifier on recognizing images created by DALL-E 3 when it sees them. And

The way it's doing this is it's feeding a model tons and tons of images created by DALL-E 3, and tons and tons of images that were not created by DALL-E 3. You show the model these images enough times, and OpenAI says that it can now detect with a 99% degree of accuracy what was made by DALL-E 3 and what was not, right?
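The setup being described is ordinary supervised binary classification. Here's a deliberately tiny sketch of the idea, with big caveats: OpenAI's real classifier is presumably a large learned model operating on raw images, while this toy pretends each image has already been reduced to a single hand-made feature value, and the numbers are invented for illustration.

```python
# Toy "was this AI-generated?" classifier: learn one summary statistic per
# class from labeled examples, then label a new image by whichever class
# statistic its feature is closer to (a nearest-centroid classifier).

def mean(xs):
    return sum(xs) / len(xs)

# Pretend each image has been reduced to a single feature value.
dalle_features = [0.82, 0.79, 0.88, 0.91, 0.85]  # images made by the model
real_features = [0.12, 0.20, 0.15, 0.05, 0.18]   # images not made by it

dalle_center = mean(dalle_features)  # 0.85
real_center = mean(real_features)    # 0.14

def classify(feature):
    """Label a new image's feature by its nearest class center."""
    if abs(feature - dalle_center) < abs(feature - real_center):
        return "dalle"
    return "real"

print(classify(0.9))  # "dalle"
print(classify(0.1))  # "real"
```

The "99% accuracy" claim would then be measured the obvious way: run the trained classifier over a held-out labeled set and count how often its label matches the truth.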

That's kind of mind blowing to me. The way that companies like Adobe have been pursuing this has been to put something in the metadata of these images that could indicate it was created by AI. But that approach has some obvious flaws, starting with the fact that if you screenshot the image, you immediately strip out all of the metadata. And all of a sudden, we don't know where it came from, right?

So, the OpenAI approach seems much more technologically sophisticated, and if it works, maybe it helps us solve this problem where we will just have technology that scans images and says, oh, I know where that came from. So, how does this connect back to what you just said? Well, it's like, if we have a system that can detect with 99% accuracy whether something was just made by DALL-E 3, what are the odds that artists putting some of these poisoned images on the web are going to trick these systems

over the long run? What do you think? Right. Well, I think that there will be this kind of cat and mouse game between the platforms and the users and the artists trying to sort of sabotage the platforms. Like, I think that, you know, many of these companies will just find ways to sort of ignore those pixels. Like, I don't think this is probably a lasting thing

but it does speak to just how pissed off people are about these AI image generators. And I imagine as someone who uses these things every day in your work that you get a lot of people criticizing you for that. So I do get, I have gotten, I should say, a handful of emails from readers who would say like, hey, why are you using this stuff? Like, I don't like it. And I always like thank them for the messages. I want to have...

that conversation. I think there's a good case to be made. And in fact, I am going to explore using maybe the Adobe Firefly model, which uses licensed images. And what they have said is that if you are using the work of one artist in particular, like maybe there is a kind of equivalent artist on the Firefly platform, they are going to pay bonuses to artists based on how many

images they have in Adobe's training dataset and the commercial value of those images. That seems like a really good and ethical system, and I think more companies should explore something like that. I think it would really lower the temperature on this discussion and would let people who want to use these tools to feel creative feel better about using them. Totally.

All right. Casey, thank you for your tour of AI Art Generators. We're sort of the Bob Ross of the modern moment, you know? Paint with us. Yeah. Using your keyboard. A little happy blue. Not bad.


Hard Fork is produced by Davis Land and Rachel Cohn. We had help this week from Emily Lang. We're edited by Jen Poyant.

This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Rowan Niemisto and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com with your greatest DALL-E creations.
