
The Deepfake Dilemma: The Technology, Policy, and Economy

2024/10/11

a16z Podcast

People
Martin Casado
General Partner, focused on AI investment and advancing the industry.
Vijay Balasubramaniyan
Topics
Vijay Balasubramaniyan: Deepfake technology is advancing rapidly and the barrier to creation has dropped, making the ability to identify what is real critically important. The arrival of generative adversarial networks (GANs) dramatically improved the quality and efficiency of voice and video cloning, driving an explosion in the number of deepfake tools and pushing creation costs close to zero. Just a few seconds of sample audio is enough to generate a high-quality deepfake. Deepfakes are being used widely in politics, finance, and media, with serious consequences. At the same time, deepfake detection technology keeps improving, currently achieving a 99% detection rate with a 1% false-positive rate. Detecting a deepfake costs far less than generating one, which makes detection an effective defense. Deepfake systems make mistakes in the frequency domain or the time domain, and those mistakes make them detectable. Watermarking has practical limitations and will not stop malicious attackers. Policymakers should craft policy that makes it really difficult for bad actors to deploy deepfakes while leaving flexibility for creators. Platforms should be held accountable for clearly demarcating real and fake content. Martin Casado: Spam and deepfakes work because their marginal cost is extremely low. Deepfakes began entering public consciousness around 2018 and are now widespread. Although the barrier to creating deepfakes has dropped, detection technology keeps improving. Deepfakes are seriously affecting politics, finance, and media, for example through election interference and financial fraud. Policymakers are worried about AI and want to regulate it, but lack a clear direction for regulation. Policy should specify what technology can detect deepfakes and impose penalties for violations.


Chapters
The discussion begins with the rapid proliferation of voice cloning tools and the challenges of identifying real content. The conversation then delves into the history of voice fraud and how deepfakes differ from traditional voice manipulation techniques.
  • 120 tools for voice cloning at the end of last year, now 350.
  • Deepfakes use generative adversarial networks to mimic voices and faces.
  • Traditional voice fraud has been around for a long time, but deepfakes offer new levels of sophistication and scale.

Transcript


At the end of last year, there were a hundred and twenty tools with which you can clone someone's voice. And by March of this year, it's become three hundred and fifty. Being able to identify what is real is going to become really important, especially because now you can do all of these things at scale.

One of the reasons that spam works and deepfakes work is that the marginal cost of the next call is so low that you can do these things at scale.

It's much cheaper to detect deepfakes. We've had ten thousand years of evolution. The way we produce speech involves vocal cords, the diaphragm, your lips and your mouth and your nasal cavity. It's really hard for these systems to replicate all of that.

Deepfake, a portmanteau of "deep learning" and "fake," started making its way into the public consciousness in 2018, but is now fully in the zeitgeist.

We are seeing an alarming rise of deepfakes. Deepfakes are becoming increasingly easy to create.

Fake videos are everywhere.

A deepfake robocall with someone using President Biden's voice. The fake ad.

We've seen deepfakes across media, e-commerce, sports, and of course, politics. And at the rate that they are appearing, deepfakes might sound like an impossible problem to tackle. But it turns out that despite the decreasing barrier to creation, our defender toolkit is even more robust.

So in today's episode, we'll discuss that with someone who's been thinking about voice security for much longer than the average Twitter user, or even your average politician. Today, Vijay Balasubramaniyan, cofounder and CEO of Pindrop, joins a16z general partner Martin Casado to break down the technology, the policy, and the economics of deepfakes.

Together, they'll discuss questions like: just how easy is it to create a deepfake today? How many seconds of audio do you need, and how many tools are available? But also, can we detect these things, and if so, is the cost realistic? Plus, what does good regulation look like in a space moving so quickly? And have we lost our grip on the truth? Listen in to find out. But first, let's kick things off with how Vijay got here. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.

I've been playing in the voice space for a really long time. I'm going to date myself, but I started working at Siemens, and at Siemens we were working on landline switches, EWSD switches, and things like that.

And so that's where I started. I also worked at Google, and there I was working on the scalability algorithms for video chat. And so that's where I got introduced to a lot of the voice-over-IP side of things.

And then I came to do my PhD at Georgia Tech. And so there I naturally got super interested in voice security.

And ultimately Pindrop, which is the company that I started, was my PhD thesis. Very similar to the way you started off your life as well. But it turned out to be something pretty meaningful, and ever since then, it's been incredible what's happened in this space.

I'm so excited to have you on this podcast. To many, deepfakes are this new thing, but you've actually been in the voice fraud detection space for a very long time.

It's going to be great to get your perspective on how things are different now and how things are more of the same. And so maybe to provide a bit of context to get us started on deepfakes, since they've entered the zeitgeist: maybe talk through what they are, when we say deepfakes, and why we're talking so much about them.

We've been doing deepfake detection for, like, seven years now. And even before that, you had people manipulating audio and manipulating video.

And you saw that with the Nancy Pelosi slowed-down speech. All they did was slow down the audio. It wasn't a deepfake; it was actually a cheapfake, right? And that is what has existed for a really long time.

What changed is the ability to use what are known as generative adversarial networks to constantly improve things like voice cloning or video cloning, essentially trying to get the likeness of a person really close. So it's essentially two systems competing against each other, and the objective function of one is: I'm going to get really close to Martin's voice and Martin's face.

And then the other system is trying to figure out: okay, what are the anomalies? Can I still detect that it's a machine as opposed to a human? So it's almost like a reverse Turing test. And what ended up happening is, once you start creating these GANs, which are used a lot in these spaces, and you run them across multiple iterations, the system becomes really, really good, because you're training a deep learning neural network; that's where the "deep" in deepfake comes from. And they became so good that lots of people have extreme difficulty differentiating between what is human and what is machine.
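A minimal sketch of the GAN dynamic described above, written in PyTorch. Every detail here (architectures, sizes, hyperparameters, the random "speech" batch) is an illustrative stand-in, not Pindrop's or any vendor's actual cloning or detection system.

```python
import torch
import torch.nn as nn

FRAME = 8000  # one second of 8 kHz audio, the low-fidelity case discussed later

# Generator: tries to turn random noise into a frame that passes for real speech.
G = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, FRAME), nn.Tanh())
# Discriminator: the "reverse Turing test" -- is this frame human or machine?
D = nn.Sequential(nn.Linear(FRAME, 512), nn.ReLU(), nn.Linear(512, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_frames: torch.Tensor) -> None:
    batch = real_frames.size(0)
    fake_frames = G(torch.randn(batch, 128))

    # Discriminator step: learn to label real frames 1 and generated frames 0.
    d_loss = bce(D(real_frames), torch.ones(batch, 1)) + \
             bce(D(fake_frames.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the discriminator call fakes real.
    g_loss = bce(D(fake_frames), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Each call is one iteration of the two systems competing; run many iterations
# and the generated output drifts toward the real distribution.
train_step(torch.randn(16, FRAME))  # stand-in for a batch of real speech frames
```

The key design point is the alternation: each side's objective is the other's failure, which is exactly why quality compounds across iterations.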

So let's break this down a little bit, because I think deepfakes are talked about more now than they were in the past, right? And clearly, this seems to have coincided with the generative AI wave. So do you think it's fair to say that there is a new type of deepfake that has ridden the generative AI wave, and therefore we need a new posture? Or is this just the same thing, brought to people's attention because of generative AI?

Generative AI has allowed for combinations of wonderful things. But when we started, there was just one tool that could clone your voice, right? It was called Lyrebird.

An incredible tool, used for lots of great applications. At the end of last year, there were a hundred and twenty tools with which you can clone someone's voice. And by March of this year, it's become three hundred and fifty.

And there are lots of open source tools that you can use to essentially mimic someone's voice or mimic someone's likeness. And that's the ease with which this has happened. Essentially, the cost of doing this has become close to zero, because all it requires for me to clone your voice, Martin, is now about three to five seconds of your audio.

And if I want a really high-quality deepfake, it requires about fifteen seconds of audio. Compare this to before the generative AI boom, when John Legend wanted to become the voice of Google Home: he spent close to twenty hours recording himself saying a whole bunch of things so that Google Home could say, "In San Francisco, the weather is thirty-seven degrees," or whatever. The fact is that he had to go into a studio and spend twenty hours recording his voice in order to do that, compared to fifteen seconds and three hundred different tools available to do it.

It almost feels to me that we need new terms, because this idea of cloning voices has been around for a while. I don't know if you remember this, Vijay. This wasn't too long ago: I was in Japan, and I got this call from my parents, which I never do. And my mom's like, where are you right now? And I'm like, I'm in Japan. And my mom's like, no, you're not.

I guess I am. She says, hold on, let me get your father. So my dad hops on the line and he's like, where are you? I'm in Japan. He's like, I just talked to you. You're in prison, and I'm leaving to go bring ten thousand dollars of bail money to you. I'm like, what are you talking about? And he's like, listen, someone called and said that you had a car accident, and you sounded a bit muffled because you were hurt, and that I needed to bring cash to a certain place. And your mom just thought to call you as I was heading out the door, right? So of course, we called the police after this, and they said, this is a well-known scam that's been going on for a very long time, and it was probably just someone trying to sound like you and muffling their voice, right? And so it seems that calling somebody and spoofing a voice to trick people has been around for a very long time.

So maybe just from your perspective: do we need a new term for these generative AI fakes because they're somehow fundamentally different? Or is this just more of the same, and we shouldn't really worry too much about it because we've been dealing with it for a long time?

Yeah, so it's interesting that it happened to you in Japan, because that's the origin of that scam. Early on, I went with an Andreessen Horowitz contingent to Japan. This was way back, close to eight, nine years back. When I was talking about voice fraud, the Japanese audience told me about ore ore sagi, which is essentially the "help me, Grandma" scam. So it's exactly that.

At that point in time, it had started costing Japan close to half a billion dollars in people losing their life savings to the scam, right? So in Japan, half a billion dollars, close to eight, nine years back. So the mode of operation is not different, right? Target vulnerable populations, get them into an urgent situation, make them believe they have to act or otherwise it's disastrous, and they will comply.

What's changed is the scale and the ability to actually mimic your voice. The fact is that now you have so many tools that anyone can do it super easily, too. Before, if you had some sort of an accent or things like that, they couldn't quite mimic your real voice.

But now, because it takes fifteen seconds, your grandson could have a fifteen-second TikTok video, and that's all it takes. Not even fifteen seconds: with five seconds, depending on the demographics, you can get a pretty good clone.

So what's changed is the ability to scale this. And these fraudsters are combining these text-to-speech systems with LLMs. So now you have a system where you're saying: okay, when the person says something, respond back in a particular way, as scripted by the LLM.

And here is the crazy thing, right: in an LLM, hallucination is a problem. The fact that you're making stuff up is normally a bad thing. But if you have to make stuff up to convince someone, it's great. It's perfect. And it's crazy: we see fraud where the LLM is coming up with crazy ways to convince you that something bad is happening.

Wow. What I want to get into next is whether it's at all possible to detect these things. But before we do that, a bit of a digression, since you probably are the world's expert on voice fraud: you've probably seen more types of voice fraud than any single person on the planet. We know of the ore ore sagi, which is basically what I got hit with. Can you maybe talk through some other uses of deepfakes that are prevalent today?

Yeah, so, you know, deepfakes have existed. But if you think about where deepfakes are hitting right now, you can see it right in the political sphere, right? Election misinformation with the President Biden robocall happened. We were the ones who caught it and identified it and things like that.

What were the specifics? Are you allowed to talk about it?

Yeah, no, for sure. What happened is early on this year. And if you think about deepfakes, they affect three big areas: commerce, media, and communication, right? And so this is news media, social media.

So what happened is, at the beginning of an election year, you have the first case of election interference: everyone during the Republican primary in New Hampshire got a phone call that said, hey, you know what? Your vote doesn't count this Tuesday.

Don't vote right now. Come vote in November. And this was made in the voice of the president of the free world, right? President Biden. That's the craziness.

They went for the highest-profile target. And you should listen to the audio; it's incredible. It sounds like President Biden, and they've interspersed it with things that President Biden says, like "what a bunch of malarkey" and things like that. So that came out.

And people were like, okay, is this really President Biden? And so not only did we come in and say this was a deepfake, we have something called source tracing, which tells us which AI application was used to create this deepfake. So we identified the deepfake.

And then we worked with that AI application. They're an incredible company. We worked with them, and they immediately found the person who used that script and shut them down so they couldn't create any other problems.

So this is a great example of different good companies coming together to shut down a problem. And so we worked with them. They shut them down.

And then later on, regulation kicked in, and they fined the telecom providers who distributed these calls, and they fined the political consultant who intentionally created these deepfakes. But that was the first case of political misinformation. You'll see this a lot this year. Yes, that was this year; it was in January of this year.

That's amazing. Okay, we've got politics. We've got bilking old people. Maybe one more area to touch on before we get into whether we can detect these things.

The one thing that hits really close to home is in commerce, right? Like financial institutions. Even though generative AI came out in 2022, in 2023 we were seeing essentially one deepfake a month at some customer, right? So one deepfake a month that some customer would face. It wasn't a widespread problem.

But this year, we've now seen one deepfake per customer per day. So it has rapidly exploded. And we have certain customers, like really big banks, who are getting a deepfake every three hours. It's insane, the speed.

So there has been a fourteen hundred percent increase in the number of deepfakes we've seen in the first six months of this year compared to all of last year. And the year is not even over.

Wow. All right. So we have these deepfakes. They're super prevalent. They're impacting politics and e-commerce. Can you talk to whether these things are detectable at all? Is this the beginning of the end, or where are we?

Martin, you've lived through many such cycles where initially it feels like the sky is falling: online fraud, email spam, there's a whole bunch of them. But the situation is the same.

They're completely detectable. Right now, we're detecting them with a ninety-nine percent detection rate and a one percent false-positive rate. So it's extremely high accuracy on being able to detect them.

Just to put this in context, what are the numbers for identifying a voice? Not fraud, just whether it's my voice.

So that's roughly about one in every hundred thousand to one in every million, right? That's the error rate. So it's much higher precision, for sure, and much higher specificity. But yeah, deepfakes we're detecting with ninety-nine percent accuracy. And so these things we're able to detect very, very comfortably.

And the reason we're able to detect it is because, when you think about even something like voice, you have eight thousand samples of your voice every single second, even in the lowest-fidelity channel, which is the contact center. And so you can actually see how the voice changes over time, eight thousand times a second. And what we find is that these deepfake systems, either in the frequency domain, spectrally, or in the time domain, make mistakes.

And they make a lot of mistakes. And the reason they make mistakes and still sound very clear is because, think about it, your human ear can't look at anomalies eight thousand times a second. If you did, you'd go mad.

You would have some serious problems. So that's the reason it sounds beautiful to your ear; you think it's Martin on the other end. But that's where you can use good AI, which can actually look at things eight thousand times a second.

Or, like, when we're doing most online conferencing, like this podcast, it's usually sixteen thousand, so then you have sixteen thousand samples of your voice. And if you're doing music, you have forty-four thousand samples of the musician's voice every single second. So there's so much data and so many anomalies that you can actually detect these pretty comfortably.
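As a toy illustration of the frequency-domain point: at 8 kHz there are thousands of spectral snapshots per second to compare, and human articulators move smoothly between them, so an abrupt frame-to-frame jump is suspicious. The feature and threshold below are invented for illustration; a production detector would learn far richer cues than this.

```python
import numpy as np

SR = 8000  # contact-center sampling rate mentioned above

def frame_spectra(audio: np.ndarray, frame: int = 256, hop: int = 128) -> np.ndarray:
    """Short-time magnitude spectra: one row per ~16 ms slice of audio."""
    windows = [audio[i:i + frame] * np.hanning(frame)
               for i in range(0, len(audio) - frame, hop)]
    return np.abs(np.fft.rfft(np.asarray(windows), axis=1))

def anomaly_score(audio: np.ndarray) -> float:
    """Ratio of the largest frame-to-frame spectral jump to the average jump.

    Vocal cords, lips, and jaw move smoothly, so adjacent spectra change
    gradually; synthesis glitches show up as outsized jumps.
    """
    spectra = frame_spectra(audio)
    jumps = np.linalg.norm(np.diff(spectra, axis=0), axis=1)
    return float(jumps.max() / (jumps.mean() + 1e-9))

t = np.linspace(0, 1, SR, endpoint=False)
smooth = np.sin(2 * np.pi * 220 * t)        # smoothly evolving tone
glitchy = smooth.copy()
glitchy[4000:4050] = np.random.randn(50)    # synthesis-style discontinuity

print(anomaly_score(smooth))   # low ratio: spectra evolve gradually
print(anomaly_score(glitchy))  # much higher: the burst stands out
```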

I see a lot of proposals, particularly from policy circles, for using things like watermarking or cryptography, which has always seemed a strange idea to me, akin to asking criminals to comply with something. So I don't know, how do you view more active measures to self-identify either legitimate or illegitimate traffic?

This is why you're in security, Martin. Almost immediately, you realize that most attackers will not comply with you putting in a watermark.

But even without putting in a watermark, right? Even if you didn't have an active adversary: take the President Biden robocall that I referenced before. When it finally showed up, the system that actually generated it had a watermark in it. But when they tested it against that watermark, they were only able to extract two percent of it.

Interesting. So you mean the original Biden call had a watermark?

It was generated by an AI app that included a watermark, yes.

And that watermark went away largely because, when you take that audio and play it across lossy telephony channels, those bits and bytes get stripped away. Audio is a lossy channel, so even if you add the watermark over and over again, it's not possible to preserve it. So these watermarking techniques, I mean, they're a great technique.

You always think about defense in depth: where they're present, you'll be able to identify a whole lot more genuine content as a result of these watermarks. But attackers are not going to comply. We are now working with news media organizations, and ninety percent of the videos and audio they get from, for example, the Israel-Hamas war are fake.

Ninety percent of them are fake? But yeah, I guess I shouldn't be so surprised. They're all made up, or from a different war?

Some of them are cheapfakes. Some of them are actually deepfakes. Some of them are stitched together. And so being able to identify what is real is going to become really important, especially because now you can do all of these things at scale.
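A back-of-the-envelope simulation of the watermark-stripping effect described above: embed a quiet watermark in the fine detail of the signal, push the audio through a crude stand-in for a telephony channel, and measure how much of the payload survives. Both the watermarking scheme and the channel model here are invented for illustration; they are not any real product's design.

```python
import numpy as np

SR = 8000
rng = np.random.default_rng(0)

speech = rng.standard_normal(SR)   # stand-in for one second of speech
bits = rng.integers(0, 2, 64)      # 64-bit watermark payload

# Carriers whose energy sits near the Nyquist frequency, i.e., in the fine
# detail of the waveform: quiet to the ear, and the first thing a
# band-limited channel throws away.
lowband = np.stack([np.convolve(rng.standard_normal(SR), np.ones(4) / 4, mode="same")
                    for _ in range(64)])
carrier = lowband * (-1.0) ** np.arange(SR)  # modulate up to high frequency

# Embed: add each bit's carrier at low amplitude (+ for 1, - for 0).
signs = np.where(bits[:, None] == 1, 1.0, -1.0)
marked = speech + 0.05 * (signs * carrier).sum(axis=0)

def telephony(x: np.ndarray) -> np.ndarray:
    """Crude lossy channel: halve the bandwidth, then 8-bit quantization."""
    narrow = np.repeat(x[::2], 2)  # drop every other sample and hold
    step = (narrow.max() - narrow.min()) / 255
    return np.round(narrow / step) * step

def extract(x: np.ndarray) -> np.ndarray:
    """Decode each bit by correlating the audio against that bit's carrier."""
    return (carrier @ x > 0).astype(int)

print(f"clean path:    {(extract(marked) == bits).mean():.0%} of bits recovered")
print(f"lossy channel: {(extract(telephony(marked)) == bits).mean():.0%} of bits recovered")
# The mark lives in exactly the detail the channel strips, so recovery on the
# lossy path collapses toward coin-flipping.
```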

Can you draw out how the maturation of AI technology impacts this? Because clearly something happened in the last year to make this economical for attackers, which is why things are on the rise. And clearly it's going to keep getting better. So do you have a mental model for why this doesn't become a serious problem in the future? Or does it become a serious problem in the future?

So one of the things that we talk about is that any deepfake detection system should have strong resilience built into it. It should not just be good at detecting today's deepfakes; you should be able to detect what we call zero-day deepfakes. A new system gets created.

How do you detect that deepfake? Essentially, the mental model is the following. One, deepfake architectures are not simple monolithic systems. They have several components within them. And what ends up happening is that each of these components tends to leave behind artifacts; we call this a fake print.

So they all leave behind things that they do poorly, right? And when you actually look at a new system, you often find they've pulled together pieces of other systems, and those leave behind their older fake prints. And so you can actually detect newer systems, because they usually only innovate on one component.

The second is, we actually run GANs. So you get these GANs to compete: we create our own deepfake detection system, and then we say, how do you beat that? And we have multiple iterations of them running, and we're constantly running them.

I just want to make sure I understand here. So you're creating your own deepfake system using the approach you talked about before, which is the generative adversarial networks, and you can create a good deepfake, and then you can create a detection for that.

Is that right?

Exactly. And then you beat that detection system, and you run that, iteration after iteration after iteration.

And then what you find is actually something really interesting, which is that if a deepfake system has to serve two masters, that is, one, I need to make the speech legible and sound as much like Martin as possible, and two, I need to deceive a deepfake detection system, those two objective functions start diverging. So, for example, I could start adding noise. And noise is a great way to keep you from seeing my limitations. But if I start adding too much noise, you can't hear the speech.

So, for example, we were called into one of these deepfakes where LeBron James apparently was saying bad things about the coach during the Paris Olympics. It wasn't LeBron James. It was a deepfake.

We actually provided his management team the necessary detail so that on X it could be labeled as AI-generated content. And so we did that. But if you look at the audio, there was a lot of noise introduced into it, right, to try and avoid detection. But then lots of people couldn't even hear the audio.

They were like, is this real? And so that's where you start seeing these systems diverge. And this is where I have confidence in our ability to detect it, right? You run these GANs, you know the architectures these deepfake generation systems are built on, and ultimately you start seeing divergences in one of the objective functions. So either you as a human will be able to detect something's off, or we as a system will be able to detect something's off.
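The "two masters" tension can be written down directly as a combined objective. In the sketch below (PyTorch again; every component is a toy stand-in, not a real cloning pipeline), the generator's loss adds a fidelity term (sound like the target) to an evasion term (fool the detector), and the weight between them is the knob that, pushed toward evasion, degrades fidelity.

```python
import torch
import torch.nn as nn

FRAME = 8000
generator = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, FRAME))
detector = nn.Sequential(nn.Linear(FRAME, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

def generator_loss(fake: torch.Tensor, target: torch.Tensor,
                   evasion_weight: float) -> torch.Tensor:
    # Master 1: sound like the target speaker (legibility/fidelity).
    fidelity = nn.functional.mse_loss(fake, target)
    # Master 2: make the detector label the clip "human" (target logit -> 1).
    evasion = bce(detector(fake), torch.ones(fake.size(0), 1))
    return fidelity + evasion_weight * evasion

z = torch.randn(4, 128)
target = torch.randn(4, FRAME)  # stand-in for real frames of the target voice
fake = generator(z)

for w in (0.0, 1.0, 10.0):
    print(f"evasion_weight={w:>4}: loss {generator_loss(fake, target, w).item():.3f}")

# As evasion_weight grows, gradient updates (e.g., injecting noise the detector
# can't model) pull the output away from the target voice: the divergence that
# leaves the fake either audibly off to a human or flaggable by a machine.
```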

Awesome. One of the reasons that spam works and deepfakes work is the marginal cost of the next call is so low that you can do these things at scale.

Right, like the cost of the next spam email or whatever. Do you have even just the vaguest sense of: if it takes me a dollar to generate a deepfake, how much does it cost to detect that deepfake? Is it one to one? One to ten? One to a hundred?

It's much cheaper to detect deepfakes, right? Because if you think about it, the closest example is: Apple released its model that could run on device, and even that model, which is a small model, has to do lots of things like voice to text. Our model is one hundred times smaller than that. So it's so much faster at detecting deepfakes. So the ratio is about one to one hundred right now, and we're constantly figuring out ways to make it even cheaper. But it's one hundredth the cost of generation.

Wow. I see. So detection is two orders of magnitude cheaper than creation. In security, you often worry there's no economical defense at all, but here the defense requires the bad guys to have two orders of magnitude more resources, which is actually pretty dramatic. Normally you'd go for parity on these things, because there tend to be a lot more good people than bad people.

And that's the thing: you have two orders of magnitude. And then the fact is that once you know what a deepfake looks like, they can't evade you unless they rearchitect the entire system, and only a few companies rearchitect full pipelines. The last time this was done was back around 2015, when Google rearchitected several pieces of its speech pipeline. It's a very expensive proposition.
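The economics here are simple enough to spell out. The dollar figure below is an assumption for illustration; only the roughly one-to-one-hundred cost ratio comes from the conversation.

```python
COST_TO_GENERATE = 1.00  # assumed attacker cost per deepfake call
DETECTION_RATIO = 100    # detection is ~100x cheaper, per the discussion above
cost_to_detect = COST_TO_GENERATE / DETECTION_RATIO

calls = 1_000_000        # a flood of deepfake calls
print(f"attacker spends ${calls * COST_TO_GENERATE:,.0f}")
print(f"defender spends ${calls * cost_to_detect:,.0f}")
# attacker spends $1,000,000; defender screens all of it for $10,000 --
# the defender doesn't need parity, just a solution in place.
```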

Is the intuitive reason that the cost is so much cheaper to detect that you have to do less? The person generating the deepfake has to sound like a human, be plausible to a human, evade detection, and so that's just more things than detecting it, which can have a much narrower focus. So it'll always be cheaper to detect, and you don't see a period in time where the AI is so good that no deepfake mechanism can detect it?

We don't see that, because either you become so geared toward avoiding detection that you actually start becoming worse at producing human-sounding speech, or you're producing human-sounding speech and we can catch the evasion. And unless you actually create a physical representation of a human, that won't change, because we've had ten thousand years of evolution. The way we produce speech involves the vocal cords, the diaphragm, your lips, your mouth, and your nasal cavity, all of those physical attributes. Think about the fact that your voice is resonating through the folds of your vocal cords.

And these are subtle things that have changed over time. It's all of what it has taken for you to become you. Somebody might have punched you in the throat at some point in time, and that's created some kind of change. There's so much that happens.

It is really hard for these systems to replicate all of that. They have generic models, and those generic models are good. You can also think about it this way: the more we learn about your voice, Martin, the better we can get at knowing where your voice is deviating.

So I have an incentive, as a good guy, to work with you on that, right? You'll have access to data that the bad people don't have access to. Totally makes sense. Yes, it seems to me like the spam lessons learned apply here: spam can be very effective for attackers, but defenses can also be incredibly effective; you just have to put them in place. And so it's the same situation here: be sure you have a strategy for deepfake detection, and if you do, you'll be okay.

That's exactly right. And I think it has to be in each of the areas, right? When you think about deepfakes, you have incredible AI applications that are doing wonderful things in each of these places, like the voice cloning apps. They've actually given voices to people who have throat cancer and things like that. And not just throat cancer: people who have been put behind bars by bad political regimes are now getting to spread their message. So they're doing some incredible stuff that you couldn't do otherwise. But in each of those situations, it was with the consent of the user who wanted their voice recreated, right? And so there's that notion that the source AI applications need to make sure that the people using their platform actually are the people who are supposed to be using their platform.

is where the partnerships that you talked about with the actual generation companies comes in so that that you can help them from the legitimate use cases as well as knifing out the legended one.

Absolutely. ElevenLabs is incredible. The amount of work they're doing to create voices ethically and safely and carefully is incredible.

They're trying to get lots of great tools out there. We're partnering with them; they're making their datasets accessible to us. And there are other companies like that, right? Another company, Respeecher, did a lot of the Hollywood movies. So all of these companies are starting to partner in order to be able to do this the right way. And it's similar to a lot of what happened with online fraud back in the two thousands, or the email spam panic back in the two thousands.

I want to shift over to policy. I've had a lot of policy discussions lately, in California as well as at the federal level. And here's my summary of how our existing policymakers think about AI.

A, they are scared, and they want to regulate it. B, they don't know why they're scared. And C, with one exception: none of them wants a deepfake of themselves.

I've found that a primary motivation around regulating AI is just this fear of political deepfakes, honestly, and these are pretty legit face-to-face conversations. And so, have you given thought to what guidance you would give to policymakers, many of whom listen to this podcast, on how they should think about any regulations or rules around this, and maybe how it intersects with things like innovation and free speech? I know it's a complicated topic.

I think the simple one-liner answer is: they should make it really difficult for threat actors and really flexible for creators. That's the ultimate difference. And history is rife with a lot of great examples, right? Like, you lived through the email days, where the CAN-SPAM Act was a great one. But it came in combination with better anti-spam technologies.

Some of our audience is of a different generation, though. Maybe just walk through how CAN-SPAM works? I think it's a good analogue.

You probably know more about the CAN-SPAM Act than I do, but it's one where anyone who is sending unsolicited marketing has to be clear in their headers, has to allow you to opt out, all of those things. And if you don't follow this very strict set of policies, you can be fined.

And you also have great detection technologies that allow you to detect this spam, now that everyone has to follow a particular standard. Especially when you're doing unsolicited marketing, or you're trying to do bad things like pornography, you have detection technologies that can detect you. Well, the same thing happened when banks went online.

You had a lot of online fraud. And if you remember, the Know Your Customer rules and the anti-money-laundering acts came in there. So the onus was on you as an organization: you have to know your customer.

That's the guarantee. And so you need technology for that. After that, you can do what you want. What was really good about both of those cases is they got really specific on, one, what can the technology detect. Because if the technology can't detect it, you can't litigate, you can't fine the people who are misusing it, and so on.

So, one, what can the technology detect? And two, how do I make it really specific what you can and cannot do? And so I think those two were great examples of how we should think about legislation. And with deepfakes, there is this very clear thing, right?

You have free speech, but for the longest time, any time you used free speech for fraud, or you were trying to incite violence, or you were trying to do obscene things, those are clear places where the free speech guarantees go away. So I think if you're doing that, you should be fined, and you should have laws that protect people against that. And that's the model I like to think of.

Awesome. So I'm going to add just one thing from CAN-SPAM that I think you've touched on, but I was actually working in email security then. I want to see if you agree with this kind of characterization. The first one is: for illegal use, policy doesn't really help, because people are not going to comply. They're going to do whatever they want, and they're doing something criminal anyway.

And so for that, we just rely on the technical solution. You can make recommendations, but for strictly illegal uses, you have to rely on technology. No policy is going to keep you safe.

But then there's this kind of grey area of unwanted stuff, right? The unwanted stuff: you didn't ask for it, it may not even be illegal, but it's super annoying, it's unwanted, and it can fill your inbox.

And for those, you can put in rules, because if somebody crosses those rules, you can litigate against them, or you can opt out of it. So regulation fits the unwanted category; I could see that definitely happening here. And of course, there's the wanted stuff, which doesn't require any regulation. Is that a fair characterization?

That's a really good categorization. I think you've said it really, really well. The only other thing that I'll say is that right now, because we consume things through a lot of platforms, platforms should be held accountable at some level for clearly demarcating what is real and what is not, because otherwise it's going to be really hard for the average consumer to know this is AI-generated versus this is not. So I think there's a certain amount of accountability there.

Because the technology is where it is, putting the onus on the platforms to follow best practices, just like we did for spam, right? Like, I rely on Microsoft and Google for spam detection. Doing the same type of thing for the platforms sounds like a very sensible recommendation.

All right, great. So let's just go ahead and wrap this up. Key point number one is that deepfakes have been around for a long time; we probably need a new name for this new generation.

And this isn't just some hypothetical thing: you're seeing a massive increase, you said as much as one per customer per day, and the cost to generate has gone way down. The good news is that these things are evidently detectable and, in your opinion, will always be detectable if you have a solution in place. And then, as a result, I think any policy should provide guidance, and maybe accountability for the platforms, to detect it, because we can actually detect it.

And so, listeners, it's something for people to know about, but it's not the end of the world, and policymakers don't have to regulate all of AI for this one specific use case. Is this fair?

That is a beautiful synopsis, Martin. You've captured it really well.

All right, that is all for today. If you did make it this far, first of all, thank you. We put a lot of thought into each of these episodes, whether it's guests, the calendar Tetris, or the cycles with our amazing editor Tommy until the music is just right. So if you like what we've put together, consider dropping us a line at ratethispodcast.com/a16z and let us know what your favorite episode is. It'll make my day, and I'm sure Tommy's too. We'll catch you on the flip side.