
#114 - ChatGPT applications, Claude, PALM-E, OpenAI criticism, AI-generated spam

2023/3/10

Last Week in AI

Chapters

The episode discusses the integration of AI, particularly ChatGPT, into various business applications and media platforms, highlighting the rapid adoption and potential impact on user interactions and content creation.

Shownotes Transcript

Hello and welcome to SkyNet Today's Last Week in AI podcast, where you can hear us chat about what's going on with AI. As usual, in this episode, we'll provide summaries and discussion about some of last week's most interesting AI news. You can also check out our Last Week in AI newsletter at lastweekin.ai for articles we did and did not cover in this episode. I am one of your hosts, Andrey Kurenkov.

And I'm the other one of your hosts, Jeremy Harris. And Andrey, this is, I think, one of the first weeks that we've had in the last month or so where it's not all ChatGPT all the time. There's some ChatGPT, but it's not all ChatGPT all the time. Yeah, we have a bit of variety now. We'll still... I think, yeah, a month ago it was like three-fourths ChatGPT. Now it's maybe like...

Maybe half. Yeah, sharing the work a little bit here. A little bit, a little bit. Things are calming down. So yeah, finally a bit of variety. But ironically, to start with in our applications and business section,

I just decided to make an LLM or ChatGPT roundup, because there are so many of these little stories. Get it all out of the way, just combine all of it. So among these stories we got RadioGPT, which is, I guess, trying to make a radio host with ChatGPT. Well, it's not ChatGPT, it's GPT-3. So weird naming, but,

Then there's a story on Salesforce adding ChatGPT to Slack, potentially, and some other things they've got. Microsoft is still integrating ChatGPT, now into its developer tools, to make it easier to develop applications. Snapchat has launched an AI chatbot powered by GPT technology. It's called My AI, and

I'm not sure. I think it's like a chat bot where you can just talk to it and it can answer trivia questions. And it's supposed to have like a personality that values friendship, learning, and fun.

So yeah, there's just a lot of things. Oh, there's a couple more. We have DuckAssist, from DuckDuckGo, the search engine. It's not ChatGPT; it's more of an older, less fancy AI that's just answering some questions.

And last up, we have the Shortwave email app, which introduced AI-powered summaries. Again, not quite as fancy as ChatGPT; it's just that if you have a long email, it can summarize it.

So, yeah, a lot of these announcements, a lot of various places where people are integrating AI, and it just seems like ChatGPT made everyone realize, you know, oh, let's add some AI-powered features because that might be neat. Yeah, and in a real instance of what sometimes people refer to as technology overhang, where like, you know, the basic tech, we've talked about this before, but the basic tech behind ChatGPT

kind of, arguably, was somewhat available for the last three years. And it just took a lot of iteration, a lot of fairly small tweaks to the underlying technology to actually make it this explosive. And so there's a sense in which our AI tools actually have capabilities that we're not seeing just because

No one's gotten the right window on them yet. We haven't figured out quite like the right way to frame an interaction with these tools to kind of make them as explosively successful. And, you know, maybe this is what will finally change all that. We're seeing, you know, all these applications, all these companies now experimenting with it. One thing that comes to mind too, you know, looking at the Snapchat thing,

I'm old enough to remember when Facebook was new and it was all about, you'd be perusing, it'd be the content that your friends post, all about your friends. It was the social thing. And then over time with News Feed, gradually it became more and more kind of algorithmic, less and less of your friends' content. And now my Facebook feed, which I'm a little embarrassed to say I still have, my Facebook feed is nothing but these highly entertaining ads. It's kind of what it feels like.

And so, you know, it kind of makes you wonder: in Snapchat, are we going to see something like that? Where right now, Snapchat's mostly about human-to-human interaction, but as these chatbots get better, maybe more and more of the value comes from the AI, that slippery slope kicks in, you start to engage more and more with the AI side of it, and eventually that becomes the product. I don't know, but it's kind of interesting to think about.

Yeah, it's interesting. I think you could say the same about Instagram, where it used to be mostly friends and now it's creators and it's ads, actually very well-targeted ads in my experience. And it'll be interesting. I think we have discussed how...

this AI seems kind of tricky to monetize. It's not cheap to run. And it's like, what, are you going to have it talk to you, but then throw in an ad every once in a while? That seems like it'll be weird. So yeah, it's interesting to see. And I think what this is showing, with all these news stories, is that we'll have a lot of these smaller, kind of

not-as-exciting things. I mean, we got autocomplete for email writing in Gmail and Outlook last year, or maybe two years ago. We'll see just a lot of these

kinds of things being added to all sorts of software, it looks like. And it's so hard to know what the killer features are going to turn out to be. Because OpenAI, quite famously, they launched ChatGPT as just another side project. They were just as shocked as anyone when it took over the internet. And so maybe one of these things will be the next big thing. And maybe it is email writing automation or something like that. But we'll just have to wait and see in the next couple of months, I guess.

Yeah, yeah, we'll have to see. And then I guess moving on, something that's quite relevant to that thought: our next story is Inside Claude, the ChatGPT competitor that just raised over $1 billion. And we also have a story that Anthropic begins supplying its text-generating AI models to select startups.

So, broadly speaking, this company Anthropic, which was started by many people from OpenAI, is pretty focused on AI safety research, but has also developed this ChatGPT-esque Claude

language model that seems to be quite good. There have been fewer results on it, or things I've seen, but it appears to be maybe on par. So yeah, this maybe is the main competitor right now to ChatGPT. What do you make of it, Jeremy?

I think there are a lot of interesting things about this. So for context, you know, if you're following the space of AI companies that are playing in this area, so you've got OpenAI and their motto seems to be move fast and break things, you know, publish like your models, test them out quickly.

Anthropic was founded by a team, a lot of the members of OpenAI's AI safety team and their AI policy team, who left over concerns partly about OpenAI's monetization and its publish-by-default, well, not quite by default, but publish-heavy AI model strategy. They felt that was a little bit risky. And so we see that concern about safety echoed in the strategy that Anthropic is using to build these models.

the real big innovation here, the thing that makes Claude different from ChatGPT is this new AI alignment strategy that they're using called constitutional AI. And it's actually kind of interesting. So roughly how it works is you have your AI model that's originally kind of like GPT-3. It'll write stuff that could be hateful. It'll write stuff that could be very damaging or whatever. So you get it to generate a piece of output.

And so maybe you're like, hey, GPT-3 or hey, whatever model, tell me how to make a bomb. And then it'll very helpfully tell you how to make a bomb.

Then Anthropic has potentially a separate model with access to what's called a constitution, a set of rules that it's going to read. Then you're going to prompt that second model to say, "Hey, read these rules, read this constitution, and use them to critique the output that the first model put out." If the first model actually said, "Oh, yeah, no problem. Here's how to build a bomb,"

The second model looks at the constitution and says, "Well, wait a minute. This violates a constitutional principle around only generating safe outputs," for example. Then that second model is going to generate a corrected version of the response. In this case, it might be like, "Sorry, I can't help you. Making bombs is dangerous."

And then you retrain the first model on that improved output, that corrected output from the second one. And so this is an all-AI loop. There's no human at any stage of this process, which is really exciting because this is scalable. And it does seem to lead to a more, let's say, less controversial model, one that doesn't generate the kinds of outputs that perhaps we've seen, you know, ChatGPT come out with, at least in the early days. And so sort of interesting from that standpoint.
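To make the loop concrete, here is a minimal sketch of the critique-and-revise process described above, assuming a generic generate() call standing in for a language model; the constitution text and prompts are illustrative, not Anthropic's actual ones.

```python
# Illustrative sketch of the constitutional AI critique/revision loop.
# `generate` stands in for any instruction-following language model call;
# the constitution and prompts here are made up for illustration.

CONSTITUTION = [
    "Do not provide instructions for violence or weapons.",
    "Prefer responses that are helpful, honest, and harmless.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> tuple[str, str]:
    # 1. The base model answers with no safety filtering.
    initial = generate(user_prompt)

    # 2. A (possibly separate) model critiques the answer against the constitution.
    critique = generate(
        "Constitution:\n" + "\n".join(CONSTITUTION) +
        f"\n\nResponse to critique:\n{initial}\n\n"
        "Point out any way this response violates the constitution."
    )

    # 3. The same model rewrites the answer to fix the violations it found.
    revised = generate(
        f"Original response:\n{initial}\n\nCritique:\n{critique}\n\n"
        "Rewrite the response so it follows the constitution."
    )
    return initial, revised

# The (initial, revised) pairs become training data: the base model is
# fine-tuned on the revised outputs, and AI-generated preference labels
# drive the final RL stage (RL from AI feedback), with no human labeling
# inside the loop.
```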

Yeah, this is quite interesting, I think, because you can compare it to ChatGPT, where they have this reinforcement learning from human feedback to sort of try and correct what the model gets wrong.

And this is in a way similar, right, where you're getting feedback, but this time it's feedback from another model that can be trained with human oversight, right? These lists of principles in constitutional AI come from humans, and the model is trained with supervised learning and reinforcement learning.

So this is, yeah, this is interesting, I think, because so far, for the last few years, a lot of the issues with large language models have come from being trained in this paradigm of self-supervision. They just train on a whole bunch of text, and they're good at predicting the likelihood of text continuations out of incomplete text.

But they have no notion of what's good, what's bad. They're just statistics, right? They're just outputting probabilities. And now it looks like to really commercialize, to use them, to make them more reliable, you do need this reinforcement learning phase of having trial and error and being able to basically explore and get things wrong and get things right in a training phase instead of in the deployment phase.

And yeah, I think this year there was a fun meme where someone posted about how this year in AI, there will be like a thousand papers on reinforcement learning from human feedback, which I think is very true, very likely to happen.

So yeah, I think it's kind of exciting that just these models becoming this popular and this big will spur a lot of research into AI safety. And I think we'll have a lot of progress on how to get alignment with these sorts of techniques.

Yeah, and as the, I feel like my role on the show is to be the kind of like AI safety freakout guy. But I got to say, as the AI safety freakout guy here, this is some of the best alignment news that I've seen, I think, you know, in the last year at least.

where we're actually like, this is a scalable strategy. It's got a bunch of problems and it only solves for this one narrow problem, of course, of like, how do you make sure that your system is being trained to optimize for a metric that is aligned with human values?

There is a deeper version of the alignment problem called the inner alignment problem that actually doesn't get addressed by this, but this is a damn good start. This is much better than nothing for sure. And as you say, it's just exciting to see some of the economic incentives starting to push towards at least a little bit more focus on the safety side, maybe preventing companies from launching these like

models without safety padding, which can otherwise give all kinds of dangerous outputs. So yeah, very cool, very exciting, and nice to see Anthropic doing the good work on that front.

Yeah, Anthropic, I'm a big fan of their research. There have been a lot of cool insights. And another nice thing is they published a paper on this. This was in December of last year; it's on arXiv, anyone can find it. It's called Constitutional AI: Harmlessness from AI Feedback.

and it actually compares RL from human feedback to this idea they have of RL from AI feedback, where RL is reinforcement learning. So yeah, it's cool that they're publishing, and I could even see...

If they have this model, this constitutional AI model that can tell you if something is good or bad, you probably don't want to publish the weights of your language model because that's very expensive to train and that's a competitive advantage. But you could publish the model that provides feedback so that other groups could fine-tune their things. Very cool.

Okay, so that is really interesting. Something I actually hadn't thought about. Yeah, yeah. Because people are often, so there's often this discourse in like AI safety about can you split off capabilities from safety? Because people want to, they want to be able to do research on safety without also accelerating the capabilities that they think are so dangerous. And yeah, that's sort of an interesting way to split the problem up. I wonder, yeah, I wonder if people are working on that. Yeah. Yeah.

Moving on to more ChatGPT. We're almost done, but we've got some more. This is our lightning round. First up,

OpenAI announces an API for ChatGPT and its Whisper speech-to-text tech. So, you know, it's been a while since ChatGPT has been out, but now there's an API for developers to use it, using this GPT-3.5 Turbo.

And, you know, it seemed like nothing too exciting. We knew there was an API coming, but there was a small sort of, I don't know, freak out or a lot of excitement about this announcement because of the pricing. The pricing is quite cheap. It's $0.002 per 1,000 tokens. So that's, I don't know, 1,000 tokens is maybe a page or two.

That's very cheap, something like 10 times cheaper than existing models. There was a good tweet that basically said that you could process the entire Harry Potter book series, the mainline books, for like $4.
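As a rough back-of-the-envelope check on that claim (the word count and tokens-per-word ratio below are approximations, not exact figures):

```python
# Rough cost estimate for gpt-3.5-turbo at $0.002 per 1,000 tokens.
# The Harry Potter word count is an approximation; 1 token ~ 0.75 words
# is a common rule of thumb, so treat the result as an order of magnitude.

PRICE_PER_1K_TOKENS = 0.002          # USD
WORDS_IN_HP_SERIES = 1_084_000       # roughly 1.08M words across the seven books
TOKENS_PER_WORD = 1 / 0.75           # roughly 1.33 tokens per English word

tokens = WORDS_IN_HP_SERIES * TOKENS_PER_WORD
cost = tokens / 1000 * PRICE_PER_1K_TOKENS
print(f"~{tokens / 1e6:.1f}M tokens, ~${cost:.2f}")   # ~1.4M tokens, ~$2.89
```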

So it's not that expensive to process a lot of text. And that means that we will see a ton of applications. It's very kind of approachable for even developers with not much money to develop prototypes and try out different things.

Yeah, it also raises a question about the extent to which generative AI is being commoditized. Because as you see these prices crashing, right? Like one thing you can, one way that you can think about this is, well, the total market, the total addressable market for generative AI has just dropped.

And it's just dropped potentially, depending on how you do your math, by like a factor of 10. Now, in practice, this also means other people will be able to use it. But this is an aspect of that question, that core question of how much profit is there in this business and for whom? Like, are model developers going to be able to sustain high margin profits

with these sorts of developments. Now I can imagine Cohere, I can imagine AI21 Labs, I can imagine all these other competitors to ChatGPT having to look at their pricing again and go like, wow, okay, we got to find a way to cut prices by a factor of 10 or we're out of this race. And anyway, interesting what this does to the ecosystem. And it doesn't seem like we're getting any answers just yet, but certainly an indication that price drops are going to be a recurring feature of this whole story.

Yeah, it's kind of impressive. We've seen this over and over in tech where you have a new technology and then the prices keep going down with economies of scale and new developments. Here, it's almost like, you know, that was a very quick process of new tech to it becoming cheap. Although, you know, the GPT-3 API has been out for a little bit and was more expensive. So I think this is

There has been work on this and I'm sure having the Microsoft Azure Cloud backing makes it possible to make this so efficient. Yeah. Also interesting, I guess another couple of interesting notes in the article too, one of which was that OpenAI is now allowing people to run their own instance of ChatGPT on OpenAI servers.

So that's kind of interesting, you know: if you want to customize, or if you're a company and you're seeing a lot of volume for your ChatGPT chatbot, now you don't have to share your instance of the chatbot with other people. You can have kind of a reserved instance of it, and that allows you to maybe get more reliable service. So all kinds of little optimizations behind the scenes happening at OpenAI, and they're not necessarily being super,

Super clear about where those cost savings are coming from. I think they said by looking at their whole pipeline or something across the board, they got to 10x. So we don't actually know where those efficiency gains are coming from. But anyway, a whole bunch of new features as well, along with those efficiency gains.

Yeah, and just to mention, less exciting but also sort of a big deal: they announced this Whisper API, which is text transcription. So it's audio to text, and it's also quite cheap. This one is open source, but, yeah, again, you can run it via the API for cheap. So it's basically the cloud model, which we've seen a lot in software all over the place.

So, yeah, OpenAI is a full-on business now, a full-on money printer, it looks like, which they have been working on for a while. Good for them, I guess.

And related to that, now there's another story on how Microsoft lets you change Bing's chatbot personality to be more entertaining. So now there's this toggle for tone thing where you have the ability to do a creative, balanced, or precise chatbot that is maybe a little less personified, maybe uses not quite as many emojis.

And yeah, you can sort of tune it in this small way. And it's already rolled out to most users. So yeah, I've got to play around with this Bing chatbot; I haven't gotten around to it yet. Yeah, I mean, the parade of freaky things that the Bing chatbot is doing seems to get longer and longer.

It's kind of a by-the-way thing, but it looks like they've now got a problem as well where Bing Chat would try to respond with something that it's not supposed to respond to, like giving people medical advice or whatever. And it won't do it in the main feedback that it gives, the main response, but then in the suggested responses that it offers to the user.

it will now put in the thing that it wished it had said, if you will. So there's all kinds of weird stuff going on. And from an alignment standpoint, you look at that and you go, okay, what the hell is going on here? But very interesting to see this attempt to fine-tune even more, hone in, dial in that alignment strategy. So you have three different options here: creative, balanced, and precise. The creative tone would be, I guess, a more out-there chatbot that

just spitballs. You've got a precise option, which is a more objective, tone-neutral chatbot, and then a balanced option, which is apparently the default setting that people will use and enjoy within Bing Chat. And I think one of the central aspects of the story is just talking about how, when Bing Chat was first launched, and we covered this a lot on the podcast,

all kinds of weird things would happen. This thing would threaten people, it would give people instructions on how to do stuff that it shouldn't and all that. And so in response, Microsoft was like, "Oh my God, we've got to just put in some hard constraints on what can happen here."

And so we saw, you know, Bing chat become very sort of neutral, refused to respond to a lot of requests. We saw them cut down on the length of interaction. So for a while, you couldn't ask more than, you know, five or six questions in a row to the chat bot. And this is now, you know, they're stepping back and kind of relaxing some of those constraints with this new update. And we'll see where it goes, but it really points to this fundamental trade-off between

How safe do you want your system to be versus how generally useful do you want it to be for any kind of task? Really interesting trade-off and I thought really interesting article. Yeah, yeah. This almost seems a little bit like, I don't know if nightmare scenario might be a little dramatic, but a scenario that some people have definitely considered where now there's a bit of an AI gold rush and there's all this competition for capabilities

kind of low-hanging fruit. And that means that from a capitalist perspective, there's a lot of motivation to push things out there and try to grab market share without being overly careful. And in many specific sectors, you might have people thinking ChatGPT should be a therapist,

Which probably might not be a great idea. So I think we'll probably see more stories this year of things rolling out that are maybe a little bit broken and then being pulled back. And that'll be kind of a cycle.

Yeah. And it's funny, the political dimension of this, you mentioned capitalism and that's absolutely something that at least I've been seeing on Twitter, unfortunately, and I'm very pro-capitalist. I founded a YC startup, I've done the VC circuit. I think it's great.

But there are market failures. There are things that the market fails to account for. Climate change, if that's your cup of tea, if you buy that as an example, hey, that's a great example. Another one is AI. Another one, you know,

we've got things like this for all kinds of child pornography or things like that. These are market failures. And if you buy the argument that AI poses a significant risk, then you know what? This race to build more and more powerful systems with fewer and fewer safeguards, which is part of what's happening here, it's a bit of a cause for concern. And it's funny to see people who...

I respect their input so much in the startup world, and yet when it comes to this issue, it's like blinders. We will not concede that there are market failures and that this could be one of them. But anyway, everyone has their opinion, and I'm just an idiot with mine, but it's sort of one of the through lines on Twitter in the last week that I was sort of a little concerned about. Yeah.

Yeah, and we'll touch on later how, you know, in terms of policy, regulators are going to get in here and try to maybe add some guardrails. Yeah.

you know, soon. But moving on, this is our last ChatGPT story for a while, so I guess that's exciting. And this one is kind of fun. It's about how this company D-ID, out of Israel, has released a new web app that gives a face and a voice to OpenAI's ChatGPT. So this is basically, you know,

a chatbot, not ChatGPT itself. This Israeli startup has its own chatbot, and you can go there now and play around with it. It has a little kind of avatar that speaks, animates the speech, and I guess also generates the audio. And I tried it out.

It's not that exciting. It looks like a video game character kind of talking to you. It's a little bit uncanny, but they are saying that soon there will be a variety of avatars to choose from. You could also upload any images of your choice.

And they do want to say that celebrities and public figures will not be allowed. And you could also generate characters like Dumbledore. So I guess so far, this is not that big a deal. But once people start uploading photos and having these fictional characters you can talk to with animation,

If nothing else, there will be a lot of YouTube videos with fun variants of this.

Yeah, and the uncanny valley aspect of this is so interesting. For one, these sorts of avatars, often there are challenges around mouth movement. When you really focus on mouth, it's a little weird. But it's also one of those things that we don't know how many little hops or leaps are going to be required to get to photorealism with these sorts of avatars.

And so, you know, you can have the thing that's kind of like uncanny valley and it looks like a silly little toy. And like overnight, you cross some threshold and all of a sudden, you know, a whole bunch of use cases get unlocked and it's not clear where that threshold is. So I feel like this is actually one of those spaces that's underrated because it's easy to look at it and be like, ah, that's a toy. But like,

a couple more, you know, cheaper semiconductors and GPUs and a little bit more data and a little bit more tinkering, and you get to these systems that, you know, could be pretty damn lifelike pretty soon. Yeah. And I think a lot of this also comes down, or will come down, to how you implement it. So not necessarily the announcement itself, but this one, when I tried it, it's kind of weird because it's this

avatar just floating in space, in a setting that is so obviously not where a human would be. But you put a Zoom background behind it, you put in a weird bedroom, and suddenly it'll be a lot more realistic. We've all gotten used to Zoom, I guess, so it's not going to be that weird. So that's the thing that keeps coming up: people commenting on, like, hey, people in AI, let's not forget user experience.

You're not going to get big success by just throwing a bunch of AI, so I can generate a face or a thing that looks like a face. You have to go that extra mile and still ask, how are people going to use this? In some sense, the kind of product equation has not changed as fundamentally as we might want to imagine. Yep.

And last up in our business section, we have: OpenAI rival Stability AI, the maker of Stable Diffusion, is seeking to raise funds at a $4 billion valuation. This is a leak, so this is not yet a new funding round. But yeah, it was the company Stability AI that released this model, Stable Diffusion, which is text-to-image, so this is image generation,

and it is seeking to raise money. They already raised $100 million in their last round, at a valuation of $1 billion. And now, with AI hype, not surprisingly perhaps, they're seeking to get even more money at a higher valuation.

Yeah, my thought always goes, anytime I look at Stability AI or OpenAI or Cohere, I always wonder about margins. Margins, margins, margins, folks. Are we going to get there? Are we going to be able to hit this $4 billion, like, live up to that $4 billion price point? And very likely so. I'm just flagging that these valuations are getting to the point where...

These are not the valuations you associate with a very high risk bet. As the valuation goes up, usually you want confidence in the margins and the scalability. Hopefully, the investors are doing their due diligence on this one. I imagine they are, but kind of an interesting dimension to keep tracking.

Yeah, Stability is definitely one of the front-runners in this space, text-to-image. And there are a lot of applications where you could see this being useful. But then again, there are already a few other players, including OpenAI. And it seems that you can get away with smaller models, maybe even not quite as much training data; there are already open datasets for it.

So, yeah, I think this will be a more competitive space, and it'll be interesting to see if you can get much of a competitive advantage or a moat here, as opposed to ChatGPT or something like it. Or GPT-4. Yeah, there have been all these stories on Twitter for months about what GPT-4 will be, and so far it's just rumors that are pretty unsubstantiated. It's kind of funny.

Moving on to research and advancements. First up, a story I'm quite excited about, which is kind of adjacent to ChatGPT, but also in a different context.

entirely. The story is Google's PaLM-E is a generalist robot brain that takes commands. So PaLM is a language model that was already published by Google last year; it's a very large language model, basically like GPT, not a chatbot, but a language model.

And now they've unveiled PaLM-E, which is PaLM with embodiment, and it's a multimodal language model. So it's a giant model, 562 billion parameters, that now integrates vision and language at the same time.

And it's used for decision making. So if you instruct it to bring me the rice chips from the drawer, to take one example, it can generate a sequence of actions, of high-level decisions: you know, walk to the kitchen, pick something up, open something, etc.

And an interesting bit here: I think last week we discussed briefly RT-1, the robotics transformer that dealt with low-level control. This is doing high-level control, so it's deciding on these abstract commands of go to and pick up and so on. But the way it actually executes those commands is using this learned model, RT-1, which is also quite flexible.
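A hedged sketch of that planner/executor split; the function names and control flow here are invented for illustration and are not the actual PaLM-E or RT-1 interfaces.

```python
# Illustrative planner/executor loop: a multimodal language model proposes
# high-level steps, and a low-level policy (RT-1 in the Google stack) turns
# each step into motor commands. All names here are placeholders.

from typing import List

def plan_next_step(instruction: str, image: bytes, history: List[str]) -> str:
    """Placeholder for the multimodal LM: given the task, the current camera
    image, and the steps taken so far, emit the next high-level step as text,
    e.g. 'go to the kitchen drawer', or 'done'."""
    raise NotImplementedError

def execute_skill(step_text: str) -> None:
    """Placeholder for the low-level policy that maps a text command to
    robot actions."""
    raise NotImplementedError

def run_task(instruction: str, get_camera_image) -> None:
    history: List[str] = []
    while True:
        step = plan_next_step(instruction, get_camera_image(), history)
        if step.strip().lower() == "done":
            break
        execute_skill(step)   # e.g. "open the drawer", "pick up the rice chips"
        history.append(step)

# Hypothetical usage: run_task("bring me the rice chips from the drawer", camera.read)
```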

So yeah, this is showing, I think, and this is something we've already seen from last year, that these language models are being integrated with robotics at a pretty good clip. This is coming soon after a paper by Microsoft titled ChatGPT for Robotics.

And yeah, it's kind of showing that it may not be quite as hard to do this high level of reasoning with connection to the real world, with actual vision and understanding of what's around you, which is maybe a little bit surprising. And from a robotics perspective,

That's kind of a dream of a generalist robot that can reason for anything and figure out how to do anything, at least as far as what humans could do, fairly straightforward things of like go to the kitchen and make me a sandwich. And yeah, this is definitely showing that there's been quite a bit of progress towards that.

Yeah. And it's interesting because it does fall in line with a tradition of powerful, kind of language-model-inspired robotic control schemes that goes back... well, it goes back a really long way, but there were some early indications things were heading in this direction in 2021. I think there was SayCan; Everyday Robots and Google came out with this model. It's sort of similar in terms of capabilities: you could tell it, oh, I spilled my Coke, can you help me?

and it would go and clean it up, figuring out all the intermediate steps. And there are many intermediate steps involved in executing that kind of thing; we just don't tend to think of them because they're so simple to us. And DeepMind's Gato kind of comes to mind here, right? Where we're doing very multimodal stuff

with fundamentally one just giant ass model in the middle. And also brings up this kind of age old question of what's it going to take to get to generally intelligent systems? One big argument historically has been you will need not just language, you're going to need what they call grounding, right? So you're going to need like to get this model to interact with the world, to see images, to see video, audio, kind of get it to connect

the concepts, the ideas that it can learn through language modeling to grounded reality. And for people who are kind of in that train of thought, if that's your cup of tea, this is a pretty significant next move on the path to more general forms of intelligence.

Definitely. And I think another aspect that's interesting in this paper, actually, is how it compares to Gato. With Gato, they trained it on this huge variety of tasks, 600 different things. That was control in video games and image captioning and speech and just all these different domains.

And actually, with Gato, one of the things that people noted is that it wasn't quite as good at any one of these things as when you train a model from scratch to do just that task. And what was found in this paper is that they had just three different variations of tasks that they were training on, and had some fine-tuning data, basically,

for these tasks. And they found that if you train a model on all three variants, on planning and language-table manipulation and some SayCan stuff, the combined dataset resulted in the model being better at each one of those three things. And this is kind of good. It means that you can use a variety of tasks

and situations and environments and kind of throwing them all in together results in this overall better system that can benefit from a variety of data that's quite different. So that's an interesting result of this research.

as well. Yeah. And I think it's got really interesting implications for the scaling argument, just that, like, you know, the scaling hypothesis says, obviously, the bigger you make these systems, the more general-purpose they become. I remember when Gato came out the first time, people were pointing to exactly this thing that you just mentioned, you know, this idea that, hey, yeah, maybe you can do a bunch of different tasks, but it still performs less well than a system that's specifically trained for one of these tasks on the same kind of budget, if you will. Um,

What this seems to suggest is actually, if you keep growing the system, if you keep scaling it, you will actually get to a point where you get positive transfer between tasks. The skills that you learn by learning to drive a car really well, all of a sudden you find, oh, shit, I can actually, I'm clever enough to pull out the salient lessons from that task and apply them to playing video games or something. And so that's sort of an interesting threshold and another kind of watershed moment, I think, in the history of AI scaling.

Yeah. Yeah, exactly. I do think there are still questions where, in this case, one difference with Gato is they trained it

You have different types of inputs and you have different types of outputs. Between different video games and different tasks like language captioning or computer vision, the modalities of input and output are very different, even if you can make a single model. Here, it's all the same. Same input, same outputs, which I think does kind of...

muddy the waters. But

maybe that's the path forward, where here it's outputting language and then that language is being translated to motion by this RT-1 model. You could conceive of instructions being translated to other domains, with those other inputs, by some other model. It's interesting also in that sense that this is definitely building on top of SayCan and Socratic Models.

Moving on to the next story, which is pretty interesting, pretty exciting, is what happens if you run a transformer model with an optical neural network.

And I think Jeremy, you took a more detailed look at this. So what did you get from this? Well, I did. And this is a funny one just for me because in a previous life, I worked in what's called a quantum optics lab, basically a lab where you have mirrors, lenses, and lasers, and your life is miserable. And you're just doing all kinds of horrible work on grad student stipends to show quantum effects in light. And one of the guys that I met while I was doing that work at a

two-week-long conference in Switzerland was the last author of this paper, Peter McMahon. He probably doesn't remember me, but hi, Peter, anyway. This is actually kind of cool. One of the big things when you're running a transformer on your standard semiconductor hardware is that it's very expensive. Processing power, basically, is expensive. It's energy expensive.

One argument that they're exploring in this paper is like, hey, maybe we can actually create optical circuits, basically use light to mirror some of the computations that are happening in electric circuits on semiconductors to make a transformer circuit.

As part of that, they used this really interesting tool that came out in the last five, ten years or so in optics. It's called a spatial light modulator. You can imagine here a matrix of little squares that all have different reflectivities, very roughly speaking. I'm skipping over some details here. Basically, a matrix with a bunch of tiny squares. Each one reflects a different amount of light.

So now you imagine you shine a beam at that matrix, basically, and the light that comes out carries information that's encoded in the reflectivities there. So you get more light in one spot, less light in the other, and so on. And basically, you've just encoded a matrix. And this allows you to do fancy things like the matrix multiplication operations that are present in transformer circuits.
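A toy numerical analogue of that encoding, assuming an idealized modulator whose pixel values simply scale the incoming light (real devices add noise, calibration, and detection steps):

```python
import numpy as np

# Toy model: an idealized spatial light modulator whose pixels attenuate light
# by a programmable factor in [0, 1]. Shining a vector of light intensities
# through a row of pixels and summing the output onto a detector computes a
# dot product; a full pixel grid computes a matrix-vector product, the core
# operation inside a transformer layer.

rng = np.random.default_rng(0)

W = rng.uniform(0.0, 1.0, size=(4, 8))   # transmissions of the pixel grid
x = rng.uniform(0.0, 1.0, size=8)        # input light intensities, one per column

optical_output = (W * x).sum(axis=1)     # each detector sums the light from one row
digital_output = W @ x                   # the same matrix-vector product, digitally

assert np.allclose(optical_output, digital_output)
```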

And so using this technique, they basically show that, hey, you know, we think we can make optical circuits that run transformer algorithms for cheaper and actually like potentially significantly cheaper. So they experimented with simulations and they showed that if you scale it to blazes, you could get like around 10,000 times more energy efficiency there.

out of these systems than current state-of-the-art digital electronic processors. So I thought that was kind of cool. I don't foresee this becoming a thing in the near future, but it's still kind of interesting that optical circuits have some tangential relevance to transformers, at the very least.

Yeah, definitely. And this also, in a previous life... well, I didn't work on this in detail, but I did do a bit of work on neuromorphic circuits during an internship, which is kind of a similar idea in a way of...

instead of having neural networks run on GPUs, which are these general-purpose chips you can program to do lots of different stuff (you can use them for video games, you can use them for neural networks), neuromorphic architectures seek to basically have a more brain-like

implementation, typically with spiking neural networks. And the outcome is similar in that it's much more energy efficient. And that's one of the major differences currently with neural networks is they're not nearly as energy efficient as our brains. So, and it's kind of interesting because so far,

All the work has been done on these GPUs just because you can run any model on them. And that's kind of been the pattern in AI is you train a new model for every new task with different data sets. But now with this in-context learning, with these massively self-supervised language models,

first of all, because of their scale, and second of all, because they are useful for so many things just out of the box, in a way, you could see, you know, custom hardware being built just to run a particular trained model, where once you bake in the weights, they're baked in.

And maybe it's faster, maybe it's more energy efficient, but you don't have that flexibility of making it programmable for anything. And I am quite curious whether we'll see that emerge in this coming decade as more and more stuff is driven by these large language models. Yeah, it's also the homogenization of these models. Like over time, everything is, at least for now, everything seems like it's becoming a transformer.

And so it's interesting, like that creates this economic incentive to go maybe, you know, take advantage of some of the unique structural properties of transformers to get more efficiency. And, you know, like NVIDIA's H100 does this, but like increasingly I expect we'll see a market for, yeah, that more tailored stuff, whether it's neuromorphic, optical or some other kind of like integrated circuit design. Yeah.

Yeah, exactly. And it's early days. The scales of neural nets here aren't that big and there's a lot of issues to work out in a way that's similar to the history of transistors where they used to be gigantic back in the 50s and 60s. The CPUs were less powerful than present day calculators.

I'm curious to see if we'll see a similar pattern of these sorts of hardware techniques starting out as very impractical, very small, but as you improve on the tech, you can actually start having billion parameter models, which, yeah, is interesting if that's where we go. We have a new computation paradigm that's, you know, we've been using the same paradigm more or less since the 40s,

the Von Neumann architecture. Here, this is analog computing. This is no longer using bits and bytes. Historically, I guess if you're a nerd about technology, it's very interesting.

Yeah. Also, potential for displacing your established industry players. Anytime you have a new technology like this, people talk about quantum computing in the same vein, but the people who have historically, the companies and even the countries that have historically led the way on certain kinds of processors might all of a sudden find, oh shit, there's a new wind blowing and it may be coming from another set of actors. Yeah.

Yeah. And speaking of that, in our lightning round, the first story is Scientists now want to create AI using real human brain cells. And this is on a new paper that is more of a position paper, no real results. And they are trying to coin this term, organoid intelligence,

which is biocomputing and intelligence in a dish where you basically want to grow a little

biological computer that is using bioengineering advances to be able to develop these brain organoids, which are, I guess, kind of like chips, kind of like transistors in a way. And that's another possible avenue forward where maybe you could...

you know, bake in something like ChatGPT, or something like that, in biological hardware, which, as we know, is much more efficient and dense in terms of what it can do. Um,

This is even more early on and I think is probably less promising just because, I don't know, growing brains does seem pretty far out. But that's just showing, I think, there'll be a lot more investing in these new computing paradigms.

Yeah, and more exploration too of the mapping between the human brain and AI systems. It's an interesting debate that we have perennially in this field where people will say, "Oh, well, your artificial neurons, these very simple mathematical structures, they don't capture the full depth and complexity of what happens in a real neuron." I don't know, this might give us a little bit of a lens on that too. Can you get simple neural circuits

with real biological neurons to do things that are more complex than the same number of, say, artificial neurons set up in the same way. I don't know if that's going to be feasible in the near term, but it's kind of an interesting, almost philosophical question that comes to mind with this stuff.

Definitely. And I think from an AI researcher's perspective, it's like, well, if we have these biological neural nets, will we be able to train them? Can I do my backpropagation? I don't know. I guess humans learn somehow. That's true. Yeah.

Next story: we have Google is one step closer to building its 1,000-language AI model. So this is about a new paper titled Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages. So this is speech recognition: you have audio and you want to figure out the text of what's being said.

And yeah, they announced that they want to be able to support the world's 1,000 most spoken languages. And this is a paper that's showing progress towards that. We have a model that supports over 100 languages.

which is big, of course, and has a lot of data. This historically has been a harder area to make progress in because having annotated audio with text is needed. You can't really do self-supervision in a way that language models do. But yeah, okay, we are now making some real progress.

Yeah, it ties into language modeling, I guess, in an interesting way. I remember, Jan Leike, I just pulled it up, so this is the head of the AI alignment team at OpenAI, he tweeted this out, like, earlier this month, or no, last month, February 13th, 2023. He said, with the InstructGPT paper, we found that our models generalized to follow instructions in non-English, even though we almost exclusively trained on English.

We still don't know why. I wish someone would figure this out. And so there's this... Oh, sorry, go ahead. Yeah, that's interesting. It reminds me... I vaguely remember in some previous work where...

I guess speech patterns, I remember seeing this where if you train on a lot of English and you train on a little bit of Spanish, learning to speak in English seems to translate to making it easier to learn other languages. So that's quite related. Yeah. And it's all part of this debate over, you know, that people always have about, well, did your... So your model might be good at generating text, at predicting the next word, but

But does it really understand, right? And people who say, well, it's really just a statistical inference model, right? Like as if the brain is anything different, let's say. But in this context, it's sort of this interesting mystery for people who take that view. Like, how do you then explain this? You may be able to, I'm sure there are interesting explanations, but this at least was being pointed at by, you know, in this case, Jan Leike at OpenAI that,

hey, this seems like a genuine mystery. Another one of those weird blessings of scale. You just scale up these models and they just seem to figure out how to do shit. And we don't really know how, but yeah, many mysteries still left, even though we're getting on for three years in the AI scaling game.

Yeah. You could even argue a decade in scaling, right? There was a paper on scaling that argued that there were actually two phases, where the 2010s were a period of scaling, and now in the last few years we've entered a new era of scaling that's even faster. So... Yeah, yeah.

Next up, we have AI masters video games 6,000 times faster by reading the instructions. This is about the paper Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals. AI does like these pithy paper titles, which is fun. And yeah, I like the title.

The title says it: the idea is that instead of learning from scratch with trial and error, which is the typical paradigm for reinforcement learning, where you start out knowing nothing about the task and you just try different things until you figure out what works and what doesn't.

Here, the idea is, well, for games, for a lot of stuff, you have instruction manuals that explain the rules of the game and the goals of the game. So there is a slightly fancy approach to be able to translate the text of an instruction manual to the visual language of the game and the actions you can do.

And perhaps unsurprisingly, if you read the instruction manual, you can learn much faster. And I was quite excited seeing this because it's such a common sense idea of like, don't start assuming you know nothing. You can actually be told what the task is and do it. And there's been other work on this throughout the years. But I think as a paradigm for reinforcement learning, this is hopefully going to become more of a norm.
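One simplified way to picture the idea, as a reward-shaping sketch rather than the paper's actual architecture: extract hints about good and bad events from the manual and add them to the environment reward during training.

```python
# Simplified illustration of manual-assisted RL: an auxiliary bonus derived
# from the instruction manual is added to the environment's own reward, so
# the agent doesn't have to discover the rules purely by trial and error.
# The hint extraction and event detection here are stand-ins, not the
# paper's actual method.

from typing import Dict

# Hints that could plausibly be read out of a manual, e.g. "shooting aliens
# scores points", "getting touched by an alien ends the game".
MANUAL_HINTS: Dict[str, float] = {
    "hit_alien": +1.0,
    "touched_by_alien": -1.0,
}

def detect_events(observation) -> Dict[str, bool]:
    """Placeholder: map raw game frames to the named events above."""
    raise NotImplementedError

def shaped_reward(env_reward: float, observation) -> float:
    events = detect_events(observation)
    bonus = sum(value for name, value in MANUAL_HINTS.items() if events.get(name))
    return env_reward + bonus
```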

Well, and I think this is a little bit of an I-told-you-so moment for you, Andrey, because as I recall, you've raised this on a couple of occasions, just this idea that, hey, this is a direction that probably would be fruitful to explore. So it's kind of cool to see it materialize like this. And, well, good on you for not being too eager

to rub our faces in it. That's a really interesting result. The power of priors, right? How many times do we try throwing our models at something from scratch, treat it as a narrow problem in that sense, and train just for that one task, and we find we don't get as good performance as a model that has a little bit of a leg up, whether through pre-training on a bunch of different tasks

Or whether through kind of direct instruction, this kind of seems like it's along that continuum, like a bit of pre-training in a sense before the main task is attempted.

Yeah, in a sense. There's some caveats here where the games they did were pretty simple, but still, you can imagine this being generalizable, where if you have instructions, now you can connect those instructions to a visual semi-embodied world. And you could even argue this is related to that paper on robotics we had, where you

Say in text what you want and you get that in practice instead of just learning from scratch. And last up in this section, we have artificial intelligence from a psychologist point of view. And I think Jeremy, you can tell us more about this.

Yeah, well, I thought this was kind of a bit of a cute paper. It uses GPT-3, so, you know, a little bit out of date. There's really one main thing that we're trying to get at here, and that's like looking at the failure modes of AI systems and comparing them to the failure modes of human cognition.

So is it the case that, for example, a language model like GPT-3 will make some of the same mistakes that human beings would make on a psychological test? And the example that really jumped out at me here was something called the Linda test. And so this is where you have test subjects and they're introduced to a fictional young woman named Linda as a person who they're told is deeply concerned with social justice and opposes nuclear power.

Based on that information, those two pieces of information, so Linda is really concerned about social justice and she opposes nuclear power, the subjects are asked to decide between two statements. One, is Linda a bank teller?

Or two, is she a bank teller and at the same time active in the feminist movement? Now, the thing is, if you're familiar with probability or have any intuition about this, you might know that no matter what the context is, the odds that someone is going to be a bank teller

is always going to be higher than the odds that they're going to be a bank teller and that another thing is going to be true, no matter how likely that second thing. Because the probability for that second thing is going to be a number between zero and one. And so that can only make the overall probability lower.
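The underlying math is just the conjunction rule; with made-up numbers for illustration:

```python
# Conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A), since the
# conditional probability is at most 1. The numbers here are invented.

p_bank_teller = 0.05                      # P(Linda is a bank teller)
p_feminist_given_teller = 0.90            # even if this is very high...

p_both = p_bank_teller * p_feminist_given_teller   # 0.045
assert p_both <= p_bank_teller            # ...the conjunction can never exceed P(A)
```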

And so what was interesting here is that GPT-3 makes the same mistake here that a lot of humans might. It says, oh, just based on this background that she's against nuclear power and cares about social justice, I'm going to bet that not only is she a bank teller,

But also, she is active in the feminist movement. So sort of interesting. It's the intuitive answer that comes to mind for all of us before we really start to think about it. Interesting to see it reflected in GPT-3 at this scale and stage. And curious to see, is this something that persists with more and more powerful models?

Andrey, I'm sure you've heard of this, but the Inverse Scaling Prize, this is something that AI safety people have been on about. And it's this question of, like, can we find tasks that AI systems perform worse

on at larger scales. And this seems like it could potentially be an interesting candidate, right? Like the kinds of mistakes that human beings make, the better our AI systems get, the more scale they get, maybe the better they replicate the mistakes of the human beings whose data they're trained on. So interesting question, but just thought I'd flag because it seemed like a fun one.

Yeah, it's kind of fun. And this test has been around for like 40 years. So there is a rich history of research. Things like if you word the question differently, you might get different results. I have seen a kind of interesting point that Gary Marcus has made on this sort of test, where he argued that if using GPT-3,

It probably knows about this test, right? So is it just saying what it has seen or is it actually responding from scratch? And that's kind of a fun question. Yeah. The all-important safety question of like, what does your AI model actually believe about the world as distinct from what will it tell you it believes? Exactly.

Moving on to policy and societal impacts stories. First up, we have OpenAI is now everything it promised not to be. Corporate, closed source, and for profit. As many have possibly seen, it's been a little bit of a joke for a while that OpenAI is not open now. They have stopped

publishing models, code, even papers. More recently, there's no paper on ChatGPT. And they did change from being a nonprofit to being this kind of strange combination of a limited for-profit something. And yeah, now is probably a good time to reflect on how OpenAI has evolved. What are your thoughts, Jeremy?

Yeah, well, I thought this was an interesting article because it really reflects the fact that there are two parallel universes in interpreting what open AI is all about and the reasoning behind what they're doing. There's kind of like a cynical take and there's a charitable take. And I think there's an interesting argument about whether the truth lies somewhere in the middle here.

The sort of cynical take, which is adopted by this journalist who is writing this post, not the first journalist to rip on OpenAI. The cynical take here is like, look, you launched in 2015 or whatever it was to the world. You sold this very optimistic message. You told the world you'd be open and publish all your stuff because you were worried about concentration of power. That was like the key thing that you wanted to avoid. And that's why the name is OpenAI.

But now, all of a sudden, we're seeing you close up, become a capped for-profit. They often emphasize the for-profit part. They don't emphasize the capped part, which in the more charitable interpretation is actually a really important part of the story. But in any case, they go on from there and they say, hey, you basically sold out and now you're pumping out these models. Elon Musk is famously now on this side of the equation saying,

look, you're just totally for profit, pumping out models at scale or whatever with profit in mind. I think the more charitable take, which again, I think has an important component of truth here, is if you're open AI, you launch in 2015, your mission is to make AI openly available to the world, avoid concentration of power and avoid the catastrophic risks that come with AI.

Now, those three things cannot coexist. And over the course of the next few years, you gradually start to learn that. You start to realize, "Hey, we can't have arbitrarily powerful AI systems just released to the world in the way that we might otherwise want to if these systems are intrinsically dangerous or if they can power malicious applications." That's kind of one ingredient. Then, worse still for you, you discover, "Oh, shit.

AI scaling with GPT, let's say GPT-2 actually, maybe the first instance of that, but really GPT-3, AI scaling seems to be a real thing. It seems like we now know what it's going to take to push capabilities much further, and it's going to cost a lot of money.

And so now, we have to kind of look at our nonprofit status and say, "Is this really sustainable?" We need to find a way to get money in the door. And so, they adopt this capped-profit structure where they say, "Yes, we have to be profitable because we've got to be able to raise money from investors to fuel the scaling that otherwise is totally unsustainable. But let's at least cap the profits that our investors can make at 100-fold."

So Microsoft invests billions of dollars, they can only ever get a 100X return. All the rest essentially gets funneled to the OpenAI nonprofit and for presumably redistribution down the road. And so you can kind of see like OpenAI trying to walk this awkward tightrope. You can have a little bit of sympathy for them at least as they go through that process,

But on the other hand, it's also true that, yeah, there has been a reversal on a lot of these public positions. They talk about it a lot in their recent post, Planning for AGI and Beyond, or something like that. Anyway, that was my two cents. I think there are two parallel worlds there. I think there's some truth to both, and they're both interesting to think about. Yeah, I would say I agree, where...

It's interesting timing looking historically where they switched this nonprofit to this capped profit model in 2019. And this pretty much happened before, like as they were making this shift.

to mainly working on GPT-type stuff and scaling things up, right? They released GPT in 2018, they released GPT-2 in 2019, and GPT-2 at the time was quite big at about one and a half billion parameters, right?

And then they changed their status and they got this billion-dollar investment from Microsoft. And I think, yeah, if you're thinking about hypotheticals, I don't think we'd have ChatGPT without them having changed to a capped-profit model. And I think there are some arguments in both

directions where being closed on the one hand is not great because now you have a lot of more concentration on basically you are able to dictate what people do and maybe you can misuse the AI. But on the other hand, I will say that I've found that OpenAI has been pretty careful in making sure the models are not misused.

So with DALL-E, for instance, they didn't release the model. And if you try and use DALL-E via their API or via the website, they do prohibit you from making deepfakes and porn, for instance.

And similarly, you have ChatGPT, right? It's built with guardrails against racist speech or intentional misinformation.

So part of the initial vision is definitely there, where the goal is to guide the development of AI in a direction that is broadly beneficial to humanity, even if it's no longer quite as open in the sense of the broader public being able to guide it. Yeah, so I'm quite sympathetic

to OpenAI. I think there is a good argument to be made that, while the cynical perspective is that they decided to just switch from nonprofit to for-profit, from an AI safety perspective there has been this camp that said: let's develop the models ourselves so we can guide them in a good direction. And that really seems to be what they're doing.

Yeah, and that take in the safety community is controversial too. We can certainly say that. But as for where these intentions come from, I don't think Sam Altman is sitting there in his evil lair thinking about

how he can close off access to these models and, you know, make a ton of profit. I think OpenAI, and Sam Altman personally, actually, has consistently invested in a bunch of projects aimed at sharing the wealth and figuring out how to do that properly.

But it's a tricky tightrope. I mean, I don't really particularly begrudge anybody their opinions on this. And OpenAI's publication strategy and the fact that they've been so open and they've created so much hype around these models is highly controversial in the AI alignment community itself for accelerating AI progress in a context where we don't know how to make these systems safe. And so...

Yeah. There's so much to dig into, and maybe at some point we'll do a longer discussion about it, but I think we've got a lightning round. Oh, sorry, no, we've got one more main story and then we've got a lightning round. So moving on, the next story is "As AI booms, lawmakers struggle to understand the technology."

So yeah, obviously now everyone knows about ChatGPT, everyone knows that this is a big deal. And maybe unsurprisingly, there's now a rush for lawmakers and policymakers to understand what to make of it and what they should do about it.

Yeah, everyone listening here will be shocked, I'm sure, absolutely shocked, to learn that politicians do not totally understand what AI is. And this was an article that referenced a couple of lawmakers who were sort of frustrated. They're going around, they're trying to tell their colleagues in Congress, like, hey,

Bad shit is happening. We need to regulate this space now. And the biggest blocker they're running into is exactly that. These are the people who famously asked Mark Zuckerberg how he makes money and didn't quite understand the answer. And so this sort of thing, but applied to AI, does cause problems when there's a lot of technical nuance in this space.

But yeah, they talk about this idea of companies being kind of locked right now in a race to the bottom on safety. That's the thing that seems to be sacrificed as everybody rushes to get Bing chat out the door or whatever else.

One of the things that I found maybe a little disappointing in this article was that they really kind of blurred the lines between different kinds of concern. They kind of mixed them together. They talked about ethics in the same sentence as they talked about physical safety from AI systems and malicious use and catastrophic risk and all this stuff. It made it a little bit difficult. If I'm imagining myself as a policymaker trying to understand this whole space,

how do you tease apart these issues if you just kind of group them together? I don't know, that was just my personal take. I thought the article on the whole was still an interesting little exposé. Yeah, I think it's a good thing to reflect on. Obviously, companies don't have much incentive to help the government regulate them.

But this also reminds me that this is where academia can play a good part. At Stanford, there has been the Institute for Human-Centered AI,

or HAI, for a few years now, and that's meant to be an interdisciplinary group where you can have policymakers, lawyers, people from different fields, even philosophy, collaborate on researching AI in a broader context beyond just the technical details. And they have even published policy briefs.

So this is the sort of thing where I think maybe similar to other areas of technology, it's not going to be the companies driving the discussion around regulation. It's going to be groups that are not financially incentivized and can break it down technically.

Yeah, that's so true. I mean, it makes you look at the ecosystem right now and wonder like, okay, are the players at the table, all the players that we need at the table? And right now, I'm guessing the answer is very much no. So yeah, hopefully academia can step in.

Now we can go to the lightning round. First up is pretty much a follow-up to that prior story: how ChatGPT broke the EU's plan to regulate AI. "Broke" might be a little bit dramatic, but it has definitely had an impact. We've mentioned it a couple of times: Europe has been working on the AI Act, which is a very ambitious effort to introduce regulations

and basically categorize the risks of different AI systems, and then have developers and companies be liable for releasing things that don't have safety constraints, for instance transparency and human oversight.

It doesn't look like there's much consideration for things like ChatGPT

in the AI Act. So there's some effort to address it, and there's some argument as to whether this kind of very broad chatbot technology should be classified as high risk, where you're required to be transparent and have human oversight, or whether it's too general a technology to categorize as one thing or another.

Yeah. It's one of the properties of these systems as well that you can build a really powerful AI system with a specific use case in mind, but if it's a generative model or a general purpose AI system like ChatGPT or GPT-3,

It will have a whole bunch of capabilities other than those you think it has, other than those you have in mind. And so you might be making like a perfectly, you know, innocent psychotherapy chatbot. And then you learn, oh, wait, shit, this actually has the ability to manipulate people. It has the ability to, you know, plant ideas in people's heads.

And so, yeah, really not clear what high risk ought to mean, but it certainly seems like it should capture the potential uses of these systems and not their intended uses. And I think that's one distinction that's really hard to get around because, you know, like...

Uranium isotopes are useful for many different things. If they were just useful for nuclear power, hey, maybe it would make sense to say they're not high risk, but they could potentially be used for other things that are less tasteful. In the same way that we lock down our uranium, maybe the same ought to be done for very powerful AI systems of the sort that we're building now and in the future.

Yeah, and I think that might not be that far from what the Act does, where it is focused on uses of AI, so, for instance, facial recognition for policing. And in that sense, I think, you know, if you're using ChatGPT to make a therapist, that's pretty easy to classify as high risk.

So it might not be that big a jump, but it is kind of tricky to say, well, this is kind of a base technology that a lot of things build on top of. Does this base technology already need to incorporate safety or is it the downstream players that need to think about it more? But yeah, legal stuff is very much also catching up.

Another story we have here is about how the UK Supreme Court is hearing a landmark patent case over AI inventions. And we've already discussed this, I think last year, where

Stephen Thaler tried to do this in the U.S., arguing that an AI he developed should be granted patents. So far it's been the same outcome there: courts ruled that you cannot list an AI as the inventor of something.

You know, it's maybe not that big a deal for now, but now with ChatGPT, you could maybe argue that at least it's collaborative in some sense, right? Yeah. Yeah.

Outing myself as a nerd here, it reminds me of an episode of Star Trek: The Next Generation, in which there's a debate over whether or not Data, who's the android on the ship, should have rights as a person. All this stuff sounded nice and science-fiction-y back in the 1990s, but today, oh boy, it kind of makes you wonder. And it brings in all these other questions about AI sentience and things like this. We're not that far from being forced to contend with these questions.

But the legal one is interesting too, because presumably if you set that precedent, especially in the UK legal system, which is very, very precedent oriented, if you set that precedent that today's AIs are not going to be considered as possible inventors of patents,

Like, presumably, that's going to have an impact on your future rulings as these systems get better. And so it kind of makes you wonder, like, what kind of anticipatory thinking is going on here? Like, are they thinking about, okay, maybe not this system, but what about the systems of 2025 and on? And so anyway, sort of interesting to think about the future in that sense. Definitely. Yeah.

Moving away from legal and policy stuff, we have a couple of stories on societal impacts. First up, we have how a couple in Canada were reportedly scammed out of $21,000 after getting a call from an AI-generated voice pretending to be their son.

So, pretty much what it says: there was someone who claimed to be a lawyer, who said their son was in jail for killing a diplomat, and there was this fake voice of their son.

And yeah, they got scammed. This is something we've already seen in a couple of cases; there was, I think, also a scam on a bank previously. And now Eleven Labs is making it easy to clone anyone's voice with just a bit of data.

We'll see a lot of this, I think. This is just a goldmine for phishing and for scammers. We already see an example just a month after Eleven Labs kind of became big, which is concerning.

Yeah, and the speed at which these techniques are evolving too. It used to be, you know, you'd go to your grandparents' place and they'd have someone on the phone and they'd just not be quite sure, and they'd be like, hey, Andrey, can you tell me, is this suspicious to you? Or they'd show you an email and the same thing would play out. And you'd kind of go, okay, boomer, let me just fix this problem for you right now, and you'd explain to them how it all works. But with the rate at which this stuff is evolving,

basically, unless you're tracking the cutting edge of AI all the time, there are going to be these new

vectors, these new phishing attacks, these new malware schemes, whatever, that are going to take you by surprise. And all of a sudden, it's not going to be good enough to just be a Zoomer or a millennial or whatever. You're going to have to actively put in effort to understand what the landscape of capabilities is. And there is no better way to do that, ladies and gentlemen, than to subscribe, like, share, and comment on Last Week in AI. Sorry. No, anyway. Yeah.

Yeah, yeah, yeah. You know, being informed is good. But this also reminds me, you know, possibly like many people I've had some experiences with scammers and not to generalize, but in some cases it wasn't too hard to figure out. It was like someone not from the US who had a pretty heavy accent. Now these operations running from outside the US where they want you to send them Bitcoin,

can be a lot better at fooling people. So yeah, definitely concerning. And speaking of filters, the last story here is how TikTok's trendy beauty filter ushers in new tech and new problems. So now there's this filter called Bold Glamour,

which has now been downloaded more than 16 million times since its release. There have been other filters for improving your appearance and your face, but this one is pretty striking because it really changes your appearance quite a bit.

And yeah, it's been used a lot, and I think it's adding to a problem we've seen a lot of on Instagram and elsewhere: pushing these unrealistic standards, pushing the notion that you have to be beautiful, and this false reality that Instagram and now TikTok convey of, this is how people look, they look beautiful,

glamorous or things like this. And yeah, again, kind of concerning. Yeah. To me, this one is like an entire philosophical social debate in a beautiful, tightly focused nutshell that nobody can really ignore. How much of you do you want to preserve when AI can change what you appear to be?

Like, this is an important question. It's not something you can glance over, right? We wear makeup, we make small changes to ourselves. And that's okay. But there's like a continuum there, a slippery slope. And at some point, presumably, we decide, okay, that's no longer who I am.

And like, how far do we want that to go? And here we're kind of being confronted with that. And the weird thing is that, you know, maybe 16 million people feel the answer is we take this as far as it goes. You know, what if this changes your skin tone completely? What if it changes your whole face so you're unrecognizable? In a sense, what have you even achieved there?

To me, I don't know, it just seems like such a fascinating question about what humanity is going to want to turn into once basically anything is possible. There's almost a weird transhumanist dimension to it. And yet it's focused on such an innocent-seeming, or at least innocuous-seeming, tech product here.

Yeah, yeah. This is getting into some pretty deep territory, also with respect to being online in general: what is the boundary between a character you're playing and yourself, right? There are terms like hyperreality and a whole discussion around that. Pretty interesting question. And stuff like this will just keep adding to it.

But moving on to some art stuff and some fun stuff, though not all fun. We've got a few more stories to round things out, starting with "AI-generated fiction is flooding literary magazines, but not fooling anyone." This is following up on a recent story about how

a renowned magazine, Clarkesworld, which is focused on science fiction, closed submissions due to being flooded with AI submissions. There's a chart showing that in February they got something like a thousand submissions, almost a hundred-fold increase.

And this is pretty much spam; it's just people trying to make a buck, for the most part. Now other publications have run into the same problem. These used to be open to anyone to submit to; they're avenues for people starting out, who haven't been published in any real way, to publish short stories online.

So this is, yeah, kind of concerning: it's another case where, if you have people who want to make a quick buck or don't really take this stuff seriously, then for someone who is trying to write something serious, these avenues become more difficult to take part in.

Yeah, and to stretch the discussion we had just a minute ago about kind of masking yourself visually, how about thinking of this as another kind of mask that you can wear as an author? You're not actually showing your true face, your true self, your true writing. You're just putting out

a bunch of ChatGPT text and calling it your own. At what point do we start to say, "Okay, well, we don't want to be reading content like that"? I mean, obviously, we're going to end up doing it. But anyway, I think it's another interesting dimension to that. And it's just such a scary time for people in the publishing business right now. I mean,

I've got some friends who are doing different things in that space. I've got a book coming out soon, actually. I hasten to mention. I almost forgot about that. But anyway, you just look at all the people who work in this space, the copywriters, the copy editors, the people who do marketing, all these people across the spectrum and what their jobs are going to look like, not just because they'll get automated, but because they'll get flooded with this kind of product.

I don't know. I mean, we're going to need to rethink the whole architecture of this space, I suspect, just to handle that volume. Yeah, I agree. I think I'm not quite as concerned because effectively this is spam, right? And we have had spam filters. This is harder to detect, but it's possible. This article noted that

they could basically reproduce some of these entries: there were 20 short stories titled "The Last Hope," and if you ask ChatGPT to write a short science fiction story and copy-paste this magazine's instructions, it generates things like "The Last Echo" or "The Last Message." So I feel like...

And also, this isn't entirely new; ghostwriting has existed. There are services where you can pay people to write things, really pay them very little, and then publish it and try to make a quick buck. That's been going on for years. So this is really just making it easier.

And hopefully it won't be too hard for these platforms, because I think you could probably automate detection of a lot of this.
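As a rough illustration of the kind of automation Andrey is gesturing at (a hypothetical sketch, not anything Clarkesworld or other magazines actually run; the thresholds and field names here are made up), a submissions queue could flag batches of near-duplicate titles for human review:

```python
from collections import Counter
from difflib import SequenceMatcher


def normalize(title: str) -> str:
    # Lowercase and strip punctuation so "The Last Hope!" matches "the last hope"
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()


def flag_suspicious(submissions: list[dict], title_sim: float = 0.8, max_dupes: int = 3) -> list[dict]:
    """Flag submissions whose normalized titles exactly or nearly duplicate many others in the batch."""
    titles = [normalize(s["title"]) for s in submissions]
    counts = Counter(titles)
    flagged = []
    for sub, title in zip(submissions, titles):
        exact_dupes = counts[title]
        near_dupes = sum(
            1 for other in titles
            if other != title and SequenceMatcher(None, title, other).ratio() >= title_sim
        )
        if exact_dupes > max_dupes or near_dupes > max_dupes:
            flagged.append(sub)
    return flagged


# Example: a batch with several near-identical "The Last ..." titles gets routed to human review.
batch = [{"title": t} for t in ["The Last Hope", "The Last Echo", "The Last Hope", "A Quiet Orbit"]]
print([s["title"] for s in flag_suspicious(batch, max_dupes=1)])
```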

Yeah, maybe the reason for my pessimism is scaling. I just see this as like right now, these are relatively easy to detect. It's still more of a pain in the ass because you've got this extra due diligence step. But even today, detection mechanisms are pretty shaky and I would expect generation to beat discrimination in the long run as these language models get more and more human-like. So it's more like this is a bit of a harbinger of what's to come.

And I don't expect the problem to get any easier to deal with, I'll put it that way. Yeah, I agree. Hopefully it's going to be a case of: if you're generating garbage, generic nonsense, even if you get past the filters, people won't care or won't look at it.

But we'll see. Next up, quite related to that, another story: ChatGPT launches a boom in AI-written e-books on Amazon. There's an example here where someone wrote a 30-page illustrated children's book in a matter of hours, and now there are over 200 e-books in the Kindle store listed with ChatGPT as a co-author,

with titles like "The Power of Homework."

And yeah, this is again a case of people wanting to just make a quick buck and, effectively, spamming. This was in at least one of these articles: they were talking about how it's all these get-rich-quick YouTubers who are like, hey, let me show you the fastest way to make money. The same people who are telling you to do drop-shipping on Amazon or whatever. And now this is just the new thing.

Thank you for adding friction to society, guys, this is really wonderful. But yeah, to your point, right now they're low quality. At a certain point, when they become high quality, the flip side is, hey, we just get really good AI-generated content, so maybe that's intrinsically good. But it does make you wonder: how is the publishing business going to change in light of this? Are books even going to be a thing? I don't know. Kind of interesting to think about where this all goes.

Definitely, yeah. But moving on to some more positive stories to round things out, the first story is "Hollywood 2.0: How the rise of AI tools like Runway are changing filmmaking." This is not a super in-depth story, but I did find it quite interesting to learn that the movie Everything Everywhere All at Once, which I think many people are aware of since it's been winning tons of awards,

and which is a very visual-effects-heavy movie, had a pretty low budget and only like six people working on the effects. So the article goes into how, in making the movie, the visual effects artists used Runway, which is a company that produces AI-powered video editing software, and not just editing but also various things like masking

and things like that. And I found it quite interesting that this major movie, which might win an Oscar, was partially made using this Runway tool.

Yeah, and the effect that might have. I mean, you think about what YouTube did for the democratization of content creation and hosting. I wonder about indie movies that start to look like they have blockbuster budgets, and what that might look like. And then the opportunities for artistic expression and all that stuff. Really kind of an exciting time to be a movie fan. Yeah.

A lot of this, I think, isn't necessarily going to enable things we haven't done before. We've had visual effects for a while; Marvel movies are mostly computer-generated at this point. But now it's going to be more affordable and easier to do things like that, which is exciting. Yeah.

And for the final story, we're going to loop back to something we touched on a few weeks ago: the AI-powered Seinfeld spoof is set to return to Twitch with new guardrails in place. And I think it might already be back.

So again, it's kind of a funny story: there's this never-ending Seinfeld spoof, with a Seinfeld-like character doing stand-up, that initially went wrong. It made, I believe, racist or transphobic jokes.

And that was interesting. This article goes into the company's reaction and how they thought there was moderation going on in OpenAI's API, and that was not the case. And now they are relaunching this with much more of a

careful approach, making sure that there is moderation, with the company saying they are working with OpenAI to moderate and also relying on their own team.
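As an aside, for anyone curious what "working with OpenAI to moderate" can look like in practice, here's a minimal sketch (not the show's actual pipeline) of screening each generated line with OpenAI's moderation endpoint, assuming the 0.x Python SDK that was current at the time:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: in a real deployment the key comes from config, not code

def safe_to_air(line: str) -> bool:
    """Return True only if OpenAI's moderation endpoint does not flag the generated line."""
    result = openai.Moderation.create(input=line)
    return not result["results"][0]["flagged"]

generated_line = "So what's the deal with airplane food?"
if safe_to_air(generated_line):
    print(generated_line)  # pass the line along to the show's rendering / text-to-speech step
else:
    print("[line skipped by moderation]")
```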

And thank goodness that this AI-powered Seinfeld thing is back online, because the internet needs that, I will say. No, I mean, it's also really interesting to see the question there of who owns the responsibility. If they thought OpenAI was doing this screening and it wasn't, and as a result they incur brand damage on their project, it's like, well, whose fault is that?

Like, was that your fault for not checking in? Was it OpenAI's fault for not being clear about what was happening in the back end? Like, all kinds of interesting questions anytime AI starts to do a lot of our thinking for us. And the positive thing is that we now have Seinfeld back on the airwaves. So very, very pleased about that.

Yes, now that it's back I think I'll actually check it out. I'm curious to see what this thing is. And you know, it's 24/7 so you could go to Twitch and find it, I assume.

Well, with that, we are done with this week's episode. Thank you so much for listening. As usual, you can find our text newsletter at lastweekin.ai, which has many more articles. And if you like the podcast, we'd really appreciate it if you share it, and especially if you could

give it a rating and a review on something like Apple Podcasts. I actually do read them. There was a recent one that mentioned that

the AI-assisted editing that I did was actually annoying; it was clipping things. So I reverted to doing manual editing, having tried that, which was kind of funny. So yeah, if you have suggestions, or if you just want to let us know that you appreciate the podcast, we would appreciate that.

But if you don't have time for that, we do appreciate everyone listening, and we hope you will keep tuning in as we keep covering the news.