
Apple’s Big Reveals, OpenAI’s Multi-Step Models, and Firefly Does Video

2024/9/14

a16z Podcast

People: Justine Moore, Olivia Moore
Topics
Olivia Moore: Apple Intelligence has made notable progress in media search and content creation, but the Siri voice assistant still needs improvement. Some features already existed in standalone apps and are not revolutionary breakthroughs. Call recording and transcription serves a specific group of users, but its quality has room to improve. Apple is getting serious about building AI-native features in-house, which may affect some startups, though the impact on more ambitious companies is limited.

Justine Moore: OpenAI's new models are a major step forward in multi-step reasoning, which will enable education apps and change how people learn. Multiple-choice questions don't effectively assess real understanding, and AI will change how math is taught. Companies are diverging in how they develop models: some focus on general intelligence, others on emotional intelligence. There is no standard benchmark for emotional intelligence yet, though users have created their own informal ones. AI models are not products in themselves; they get embedded in products.

Olivia Moore: Spotify daylist is a case study in successfully applying an AI model inside a product; its success comes from a non-invasive, delightful user experience rather than the model's accuracy. User experience, ease of use, and social features are critical to a product's success. The gap between the best model and a good-enough available model varies by domain. For companies using open-source models, the importance of fine-tuning depends on the product and use case; in the enterprise, the need for fine-tuning is likely higher. Adobe Firefly does a good job of integrating AI into existing products, and its video model focuses on professional video creators. Adobe may integrate third-party models into its products, which would create new distribution opportunities for startups. Adobe also lets outside companies build plugins on top of its products, another opportunity for startups. The models Adobe showed lean toward improving existing content rather than creating from scratch.


Chapters
Apple introduced several AI-powered features in their latest iOS update, including visual intelligence, enhanced photo editing, and advanced media search capabilities.
  • Apple Intelligence is now native to the iOS Operating System.
  • Features like visual intelligence and enhanced photo editing were previously available via standalone apps.
  • The most game-changing feature is the advanced media search, allowing natural language queries across photos and videos.

Transcript


It's not what a human curator probably would do, but the weirdness is almost a feature instead of a bug.

Apple feels like the only company that could actually make that happen, since they have kind of permission and access across all the apps.

Is our model, trained on this very limited data, truly the best model for making that image or video?

Putting my investor hat on, this is not a case where I'm necessarily worried for startups.

This was a big week in technology. For one, Apple held its annual keynote event in Cupertino, releasing new hardware models of the iPhone, AirPods, and Watch, plus a big step forward in Apple Intelligence. We also had new models drop, including OpenAI's o1 models focused on multi-step reasoning, and Adobe's sneak peek of their new Firefly video model.

So in today's episode, we break down all this and more, including new AI features from Spotify getting seventy percent week-over-week feature retention, IQ vs. EQ benchmarks, and of course, what all of this signals about what's to come. This episode was recorded on Thursday, September 12, in our a16z office with deal partners Justine and Olivia Moore. Now, if you do like this kind of episode, where we break down the latest and greatest in technology, let us know by sharing the episode and tagging a16z.

Let's get started. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.

Earlier this week, on Monday, Apple unveiled a bunch of new products, the iPhone 16, AirPods 4, the Watch Series 10, but also Apple Intelligence, right? So a bunch of hardware, but it seems like they're also upgrading the software stack. Olivia, you got early access to this, is that right?

Yes, it was actually open to anyone, but you had to go through a bunch of steps. So with some of these new operating systems, they offer early access to what they call developers. But if you snoop around on your phone a little bit, anyone is able to download it and get access a few weeks ahead of the rest of the crowd. I think access to iOS 18 should be launching September 16 for anyone with an iPhone 15 or 16.

Okay. So you've been playing around with this for two weeks or so. Apple Intelligence, yeah, what did you find? What are the new features? Maybe just break them down first. And then, where did you find the really inspiring, this-might-change-the-world kind of stuff? And where, maybe, do you think things are falling short?

So Apple Intelligence is a set of new AI-powered features that are native to the iOS operating system. They're already built into the Apple apps, into the phone itself. And we've heard that they might charge for it down the line, but at least right now, it's free to everyone who has iOS 18, which is really exciting.

A lot of the features, honestly, were things that have been available via standalone apps you had to download, and maybe had to pay for, for a couple of years. One classic example is what they call Visual Intelligence, which is actually just uploading a photo of a dog and getting a report on what dog breed it possibly is, or uploading a photo of a plant. Which is nice, to not have to have a separate app.

But is this really game-changing? Probably not. Similarly, they have a new photo editor where you can intelligently identify a person in the background of your nice selfie, and one click to remove them. Is it helpful? Yes. Is it that much better than all of the tools available via Adobe and other products to do it in a more intense way? I would say probably not.

I think we both felt the probably most game-changing and exciting features were actually around media search, because everyone has hundreds, if not in our case thousands or tens of thousands, of photos and videos saved on their phone. And I think iOS has slowly been getting smarter about trying to tag people or locations, but this is a really big step forward. So now, in natural language, you can search either by a person's name or description, by a location, by an object. And it searches not only photos, but also within videos.

Yes, our mom texted me earlier this week asking, do you have that video from a couple of years ago, in Maine, of the seal popping out as we were kayaking? And I was able to use the new Apple Intelligence

to search and find it.

Yes. The seal was like thirty seconds into a two-minute video, and the rest of the video is just the ocean. Before, unless I had remembered the date of her text, I could never have found it. And this time I could search something like "Maine seal" and it pulled up the video right away.

Yeah, it's amazing. I mean, I like to joke about how many gigabytes, terabytes, petabytes of just food photos exist somewhere on all of our phones that we'll never see again. So it sounds like that was maybe the most useful feature

that you found? Yeah. And it also lets you create things with that media in a new way. Everyone remembers those sometimes cringey, sometimes charming Memories videos, where it tries to remind you of what you were doing, you know, one day two years ago, or some trip that you took. Now you can actually give natural language input, like, put together a movie of all my fun times at the beach, and it does that.

So I think that's something that Apple is uniquely positioned to do, since they're the one that kind of owns all of that media. That was pretty exciting. The one maybe disappointment for me, or the thing that feels yet to come, is a true AI-native upgrade to Siri. It feels like, especially since the launch of, for example, ChatGPT voice, Siri feels so antiquated as a voice companion, yeah.

And they made some improvements to Siri, like she's better able to understand if you interrupt yourself in the middle of a question. But it still is not action-oriented. I would love to be able to say, Siri, call an Uber to this address, and have her do that. And Apple feels like the only company that could actually make that happen, since they have kind of permission

and access across all the apps. I mean, if you think about Siri, it almost only comes up when you don't want Siri, yes, to show up. But there were a few other updates, right? Notification summaries, maybe the kind of upgrades that you will only see on your phone, because that's where you have that root access.

Yeah. Do you have any thoughts? We're talking about this maybe-next evolution of AI-native software on these devices. Like, when will we get that from Apple? And any thoughts around what that might look like?

It does feel like this release was a big step forward in a couple of ways. They could have done things like the object removal or the better photo search a long time ago, and they just hadn't done it. And I think a lot of people felt like they were just choosing not to do it.

They were just choosing to let the third-party app ecosystem do these sorts of things. But I think these releases show that they are serious about doing some of these things internally and having them be native to the iOS

ecosystem. I personally will be really curious to see if they do more around the original purpose of the phone, which is calls. Historically, there's very little you can do around a call, right? You make a call, maybe you can merge calls, maybe you can add someone, but they have not wanted to touch it much. And I think the new call recording and transcripts feature is pretty crazy, because historically they've made it impossible to do voice recording or anything like that when you're making a call. And now they're actually enabling it. Hey, we're going to have this AI copilot that makes a little noise at the beginning, that sits and listens to your call. And eventually you could see them saying, hey, there was a takeaway from this call, which was to schedule a meeting, and in your Apple Calendar it should be able to do that and send the

invite to someone, yeah. So now if you launch a call, you can press a button on the top left that says record. It does play a little two-to-three-second tone that the other person will hear, that says this call is being recorded. But then, once the call is completed, it saves a transcript down in your Apple Notes, as well as some takeaways. I think the quality is okay, but not great.

I would imagine that improves over time. But again, there are so many people, and so many apps now, that have a lot of users and make a lot of money from things that seem small. Like older people who have doctors' appointments over the phone, and they need to record and transcribe the calls for their relatives; that actually does a shocking amount of volume. And so I think this new update shows them maybe pushing towards some of those use cases and allowing consumers to do them more easily.

Yeah, maybe just to round this out, this speaks to both your points. There are so many third-party developers who have historically created these apps. I mean, you mentioned the ability to detect a dog breed; if you go on AppMagic or data.ai, you can see they're pretty massive, yes, and that's their single use case.

But it works. People need it. What happens to those companies? What does it signal in terms of Apple's decision to capitalize on that, and maybe, less so, have this open ecosystem for third parties?

Yeah, I think it kind of raises an interesting question about what the point of a consumer product often is. Is it just a utility, in which case Apple might be able to replace it? Or does it become a social product or community? Say there's a standalone plant app, and then there's the Apple plant identifier.

You might stay on the standalone plant app if you have uploaded all of these photos of plants that you want to store there, and now you have friends around the country who like similar types of plants, and they upload and comment. It becomes a Strava-for-plants type thing, which sounds ridiculous, but there are massive communities of these vertical social networks. And so I think there's still huge opportunity for independent consumer developers. The question is just, how do you move beyond being a pure utility, to keeping people on the product for other things that they can't get from an Apple?

Yes, I agree. Putting my investor hat on, this is not a case where I'm necessarily worried for startups. I think what Apple will do, as they usually do, is build probably the simplest, most utility-oriented version of these features; they're not going to do an extremely complex build-out with lots of workflows and lots of customizations. So yes, they might kill some of these standalone apps that are being run as cash-flow-generating side projects, but I don't see them as much of a real risk to the venture-backed companies that are maybe more ambitious in their product scope.

If we think about utility, right, one of the ways that you drive utility is through a better model. So maybe we should talk about some of the new models that came out this week. I'll give it to OpenAI first: as of today, as we're recording, they released their new o1 models, which are focused on multi-step reasoning instead of just answering directly.

In fact, I think the model even says something like, "I thought about this for thirty-two seconds." The article they released said that the model performed similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology, and that it excels in math and coding.

They also said that on a qualifying exam for the International Mathematics Olympiad, GPT-4o, the previous model, correctly solved only thirteen percent of problems, while the reasoning model they just released scored eighty-three percent. So it's a huge difference. And this is something a lot of researchers have actually been talking about, right, this next step. I guess, opening thoughts on the model and maybe what it signifies,

what you're seeing? Yeah, it's a huge evolution, and it's been very hyped for a long time, so I think people are really excited to see it come out and perform well. I think even beyond the really complex physics, biology, and chemistry stuff, we could see the older models struggle even with basic reasoning. So we saw this with the whole "how many R's are in strawberry"

yeah, thing. And I think where that

essentially comes from is that these models are next-token predictors. And so they're not necessarily thinking logically about, oh, I'm predicting that this should be the answer.

But, if I actually think deeply about the next step: should I check that there are this many R's in strawberry? Is there another database I can search? What would a human do to verify and validate whatever solution they came up with to a problem? And I think over the past, I don't know, year or year and a half, researchers found that you could do that decently well yourself through prompting.

Like, when you ask a question saying, think about this step by step and explain your reasoning, the models would give different answers on basic questions than if you just asked the question directly. And so I think it's really powerful to bring that into the models themselves, so they're self-reflective, instead of requiring the user to know how to prompt chain-of-thought reasoning.
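To make that concrete, here is a minimal sketch of the prompting trick being described, asking the same question with and without a chain-of-thought nudge, using the OpenAI Python client. The model name and prompt wording are illustrative assumptions, not anything specified in the episode.

```python
# Minimal sketch: the same question asked directly vs. with an explicit
# chain-of-thought nudge. Model choice and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "How many r's are in the word 'strawberry'?"

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # a pre-o1 model that answers in one shot
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct question: the model just predicts an answer token by token.
print(ask(QUESTION))

# Chain-of-thought prompt: ask the model to reason step by step first.
# o1-style models bake this self-reflection in, so no nudge is needed.
print(ask(QUESTION + " Think step by step: spell the word out letter by "
          "letter, count as you go, and explain your reasoning."))
```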

I agree. I think it's really exciting, actually, for categories like consumer edtech, where actually a huge amount, some say a majority, of ChatGPT usage is by people with a .edu email address, or people who have been using it to generate essays.

But historically it's been pretty limited to writing, history, those kinds of subjects because, as you said, these models are famously bad at math and science and other subjects that require maybe deeper and more complex reasoning. And so a lot of the products we've seen there, because the models are limited, have been "take a picture of my math homework and go find the answer online," which is, you know, fine and good, and a lot of those companies will make a lot of money. But I think we now have an opportunity to build deeper edtech products that change how people actually learn, because the models are able to correctly reason through the steps and explain them to the user.

And when you use it today, you can see the steps it thought through. By default, it doesn't show you all the steps, but if you want or need to see them, like for a learning process, you can get them.

I did test it out right before this with the classic consulting question: how many golf balls can fit in a 747? And o1, the new model, got it completely correct in twenty-three seconds. I tested it on 4o, the old model, and it was off by 2-3x, and it took longer to generate. So a very small sample size, but really promising results.
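For reference, the classic estimate can be reproduced as a back-of-the-envelope calculation. All inputs below are rounded assumptions (golf ball diameter about 4.3 cm, usable 747 interior volume on the order of 900 m³, roughly 64% random sphere packing), not figures from the episode:

```latex
\[
V_{\text{ball}} = \tfrac{4}{3}\pi\,(0.0215\,\text{m})^3 \approx 4.2\times 10^{-5}\,\text{m}^3
\]
\[
N \approx \frac{900\,\text{m}^3 \times 0.64}{4.2\times 10^{-5}\,\text{m}^3} \approx 1.4\times 10^{7}
\]
```

That is, on the order of ten million golf balls, which is the ballpark most versions of the brainteaser accept.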

That's important. I think you tweeted something about this recently, or along those lines. A lot of people are up in arms about this technology being used in classrooms.

And I think you had a really interesting take, which was, okay, this is actually pushing us, forcing teachers, to educate in a way where even if you can use this technology, you still have to think and develop reasoning. It's funny.

I'd found a TikTok that was going viral showing that there are all these new Chrome extensions for students, where you can attach it to Canvas or whatever system you're using to take a test or do homework, and you just screenshot the question directly and it pulls up the answer and tells you it's A, B, C, or D. And in some ways it's like, okay, that's cheating.

You don't really want to pay for your kid to go to college to be doing that. But on the other hand, before all of these models and these tools, most kids were still just Googling those questions and picking multiple choice. And you could argue a multiple-choice question, for a lot of subjects, is probably not actually the best way to encourage learning, or to encourage the type of learning that's actually going to make them

successful in life, or to even assess true understanding. Like, when someone gives a multiple-choice answer, you have no idea if they guessed randomly, if they got to the right answer but had the wrong process and were lucky, or if they actually knew what they were doing.

Yeah. And I think the calculator comparison has been made before in terms of AI's impact on learning. Similar to the fact that now that we have calculators, it took a while, it took decades, but they teach kids math differently and maybe focus on different things than they did when it was all by hand. I'm hoping and thinking the same will happen with AI, where eventually the quality of learning is improved, maybe because it's easier to cheat on the things that are not as helpful for true understanding.

Right. And I mean, this just came out today. Is this a signal of what's to come for all of the other models, at least the large foundational models? Or do you see some sort of separation in the way different companies approach their models and think about how they think, per se?

It's a great question. I think we're starting to see a little bit of a divergence between general intelligence and emotional intelligence. So if you're building a model that's generally intelligent, you maybe want it to have the right answers to these complex problems, whether it's physics, math, logic, whatever. And I think folks like OpenAI, Anthropic, or Google are probably focused on having these strong general intelligence models, so they'll all probably implement similar things, and are likely doing so now.

And then there's sort of this newer group of companies emerging, I would say, that are saying, we don't actually want the model that's the best in the world at solving math problems or coding. We're building a consumer app, or we're building an enterprise customer support agent, or whatever, and we want one that feels like talking to a human, is truly empathetic, can take on different personalities, and is more emotionally intelligent. So I think we're reaching this really interesting branching-off point, where you have probably most of the big labs focused on general intelligence, and other companies focused on emotional intelligence and the longer tail of those use cases.

So interesting. Do we have benchmarks for that? There are obviously benchmarks for how a model does on math, and because we're not quite at perfection in terms of utility, that's what people are measuring. But have you

seen any sort of... I have, in the sense that, for certain communities of consumers using it for therapy or companionship or whatever, if you go on the subreddits for those products or communities, you will find users that have created their own really primitive benchmarks. Yes, like, I took these ten models, I asked them all of these questions, and here's how I scored them. But I don't think there's been an emotional intelligence

benchmark at scale. A Reddit one might get created, though. Yeah, I would not be surprised. Yes.

Talk to us, definitely, if you're building that. I think that also relates to the idea that these models ultimately, in themselves, aren't products. Yeah, they're embedded in products.

I think, Olivia, you shared a Spotify daylist tweet about how that was just a really great win for an incumbent, because all of the incumbents are trying to embed these models in some way. You said that it was a really great case study of how to do that well.

Yeah, so Spotify daylist,

which we both love.

I never share my Spotify Wrapped, because basically it's just an embarrassment.

But that's my...

"emotional gentle wistful thursday afternoon."

That's actually much better than mine would look. Yeah, I get a lot of, let's say, less flattering ones. So basically, what Spotify daylist does is, it's a new feature in Spotify that analyzes all of your past listening behavior, and it creates a playlist based on the types of music, genre-wise, mood-wise, that you typically listen to at that time.

And it makes three or four a day by default, I think. Yeah, it switches out every six

or so hours. Exactly. And the feature was very successful. Spotify tweeted recently, I think it was something like seventy percent of users are returning week over week, which is really, really good retention, especially since it's not easy to get to. Like,

you have to go to the search bar.

Right, it's not obvious. Yes, it's really fun.

And I think why it works so well, when so many other incumbents have just tried to tack on a generalist AI feature, is that it utilizes existing data that it has on you, executed in a way that doesn't feel invasive but instead feels delightful. And it's not just a fun one-off novelty; the recommendations are actually quite good, so you will end up listening to it fairly often. And that's why I think people come back week over week. As well as, it still has that novelty of, it said something crazy about me, I'm going to screenshot it on my Instagram and make sure my friends know that this is how I'm feeling right now, yeah.

The daylists have gone totally viral among Gen Z teens in particular. They're posting the crazy words in their daylists all over TikTok and Twitter. I think what Spotify does is it takes the data, runs it out to an LLM, and asks, what's a fun description of this playlist? But since it's not a human, the descriptions are often like "norm core black cat panics thursday morning," and you're like, what is this? Is this even me?

What does it even mean? Yes, like, is it

right? But I'm also confused in a way that will keep me coming back to see what the next daylist is. Yes. And yeah, it's inherently viral in a way that I've only seen with Wrapped, probably, for Spotify.
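As a hypothetical sketch of the pipeline described above, existing listening data in, a weird-but-fun title out, something like the following would do it. The function, prompt, and model choice are invented for illustration; this is not Spotify's actual implementation.

```python
# Hypothetical daylist-style pipeline: summarize existing listening data
# into an oddly specific playlist title. Not Spotify's real implementation.
from openai import OpenAI

client = OpenAI()

def daylist_title(recent_tracks: list[str], weekday: str, time_of_day: str) -> str:
    prompt = (
        f"Here are songs a user tends to play on {weekday} {time_of_day}s:\n"
        + "\n".join(f"- {t}" for t in recent_tracks)
        + "\n\nWrite a 4-6 word, lowercase, slightly weird playlist title "
        f"capturing the mood, ending with '{weekday} {time_of_day}'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # a small, cheap model is plausibly "good enough"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(daylist_title(
    ["Phoebe Bridgers - Motion Sickness", "Bon Iver - Holocene"],
    "thursday", "morning",
))  # e.g. "wistful flannel introspection thursday morning"
```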

I would say another example of a good implementation of AI, similarly both interesting and viral, would be on Twitter: Grok, their new AI chatbot. A lot of the "read my tweets and roast my account," or "draft a tweet in this person's tone." It's similarly taking the existing data they have on you and creating something that's fun and terrible and interesting, and it doesn't feel invasive, because you are going and making the request, versus it pushing something into your feed.

Yeah, and maybe the takeaway is this idea that the best model doesn't necessarily equal the best product. Yeah, I think you quoted a tweet about this: remember DALL-E 3? When it came out, everyone was talking about how the prompt coherence was so good. And then his point was, how many people are still using this model anymore? And the answer, I think, is not many.

Yeah, I think there are a couple of angles to that. So for daylist in particular, it's not the most accurate LLM description of what your music taste is like. It's not what a human curator probably would do, but the weirdness is almost a feature instead of a bug. Yeah, it's sort of an example of the emotional intelligence

versus the general intelligence, where it knows what the person wants, but not in a dry, "oh, you listen to soft country on Thursday mornings" way. I think the other part is, on the creative tool side, we've seen there are different styles for different people, but also: how do products fit into your workflow? How easy are they to use? Are there social features? Is there easy remixing? All of the things that make consumer products grow and retain historically can drive a worse model to do better than a better model, yeah.

And I think it differs across modalities. Again, Spotify is probably using an LLM to generate these, and it's not the most complicated LLM in the world, right? But it's good enough to generate interesting descriptions. I would say, for most text models and even most image models, the gap between a great open-source or publicly available model and the best-in-class private model: there is a gap, but it's not necessarily a gulf. Versus in video and music, and some of the other more complex modalities, there is still a pretty big gulf between what the best companies have privately and what is available via open source or otherwise. And I think, if the text and image trend continues, that will probably shrink over time across modalities. What that means, again, is that it's not "does this team have the best researchers," especially for consumer products, but "does this team have the best understanding of the workflows, the structure of the output, the integrations, the consumer behavior and emotionality behind it," that will allow them to build the best product, even if it's not the best model, as long as the model is good enough to fulfill the use case.

Totally, right. How important is it for these companies that are using open-source models to fine-tune for their own use case? Like, how important is it for them to modify the model itself, versus just being clever with retention hacks or product design, things like that?

I think that depends on the exact product and use case. I mean, we've seen cases where people go totally viral by taking a base Flux or Stable Diffusion or whatever, and allowing people to upload ten images, and it trains a LoRA of you and makes really cool images. But the company didn't fine-tune their own

model. The AI headshot apps made tens of, and in some cases hundreds of, millions of dollars. Like, maybe there's a fine-tune in there, but it's probably pretty basic.

Yes. But then I think, on the consumer side, usually with the base models, through prompting or designing the product experience around them, you can get pretty far. In enterprise, we're starting to see a little more of a need to fine-tune models. I've talked to a bunch of companies, for example, doing product photography for furniture or other big items, where you don't have to do a big photo shoot; you can just have AI generate the images. And for that, you might want to fine-tune the base model on a giant dataset of couches from every possible angle, so that it gets an understanding of, how do you generate the back when you only have the side or the front shot

of the couch. Because the bar is just so much higher there, in terms of usable output, than in a consumer product, where in so many cases the randomness is part of the fun.
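As a rough illustration of the consumer-side pattern, a base open model plus a lightweight LoRA rather than a deep fine-tune, here is what that flow can look like with the diffusers library. The base checkpoint is a real public one, but the LoRA path and prompt are hypothetical placeholders:

```python
# Sketch: base open model + user-specific LoRA (the "trains a LoRA of you"
# flow). The LoRA path below is a hypothetical placeholder.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# A LoRA trained on ~10 photos of one user; it adds a few MB of weights
# on top of the frozen base model instead of retraining it.
pipe.load_lora_weights("path/to/user-headshot-lora")

image = pipe(
    "professional headshot photo of sks person, studio lighting",
    num_inference_steps=30,
).images[0]
image.save("headshot.png")
```

The enterprise case described above, fine-tuning the base model itself on a large domain dataset like couch photos from every angle, is the heavier version of the same idea: more data and compute, in exchange for reliability that a prompt alone won't buy.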

Well, on the note of models, to talk about one more: Adobe released their Firefly video model. Firefly was released in March 2023, but that was text-to-image, and so now they're releasing this video model.

They released this sneak peek on Wednesday. They also said, interestingly enough, that since March 2023 the community has now generated twelve billion, as they put it, images and vectors, which is a ton. And now they're moving toward video, and they released a few generations that were all created in under two minutes each. Yeah, thoughts?

Adobe is a super interesting case because of how they describe their models: they only train on the safest, responsibly licensed data, and they don't train on user data. So I think historically they've been a little bit hamstrung in terms of pure text-to-image, and probably text-to-video, quality, because when you limit the data that much compared to all the other players in the space, the outputs typically aren't as high quality.

I will say, where they've done super well is: how do you bring AI into products that people are already using? I don't know if this counted in the Firefly generation numbers, I guess it did. But they've gotten really good at, for example, within Photoshop, you can now do Generative Expand, where you've got a photo that was portrait and you wanted it to be landscape, whatever. You can just drop the photo in, hit the crop button, drag out the sides, and then Firefly generates everything that should have been around the original image, kind of filling in the blanks.

And I've also seen viral TikTok trends of someone uploading a photo of themselves standing somewhere, and then using Generative Fill to kind of zoom out and see whatever the AI thinks they were standing on, yeah.
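Firefly itself is closed, but the general mechanics of a generative-expand-style feature can be sketched with an open inpainting model: pad the canvas, mask the new pixels, and let the model fill in the blanks. The model choice and sizes below are assumptions, and this is not Adobe's implementation:

```python
# Sketch of outpainting ("generative expand") with an open inpainting model:
# place the original on a wider canvas, mask the empty margins, regenerate.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

original = Image.open("portrait.png").convert("RGB")  # e.g. a 512x768 portrait
W, H = 1024, 768  # the wider, landscape target canvas
x = (W - original.width) // 2

# "Drag out the sides": paste the photo centered on a larger canvas.
canvas = Image.new("RGB", (W, H), "black")
canvas.paste(original, (x, 0))

# Mask semantics: white = regenerate, black = keep the original pixels.
mask = Image.new("L", (W, H), 255)
mask.paste(0, (x, 0, x + original.width, original.height))

expanded = pipe(
    prompt="natural continuation of the scene",
    image=canvas.resize((512, 384)),      # this checkpoint works best near 512px
    mask_image=mask.resize((512, 384)),
).images[0]
expanded.save("expanded.png")
```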

Which I think reflects the fact that Adobe, for the first time, has made that accessible to everybody. Their products have historically been pretty heavy, in a positive way: complex, professional tools, yes. But with AI, they've now put Firefly on the web for free, they have a mobile app in Adobe Express; they're really going after consumers in a way that I think we haven't seen them do before. I will say, reading the blog post for the new video model, it did seem very focused on professional video creators and how to embed it into their workflows.

Like, okay, you have one shot. What's the natural next shot that comes out of that, and how do we help you generate it? Versus a consumer video tool.

Which makes sense, I think, because what has really resonated with them in image is, I would say, Generative Fill and Generative Expand, which is sort of taking an existing asset and saying, you know, if this was bigger, what would be included? Or, I want to erase this thing. That's where they really shine. And honestly, I still use those features

all the time, yeah. Yeah. They announced in the past that they're also going to be bringing some other video models into their products, like, I think, Sora and Pika and others. And so I don't, at least, see this as their attempt to be the dominant, killer, all-in-one video model, but maybe starting by integrating some of their own tech.

They have a really interesting opportunity, because they have so many users. They're essentially saying, okay, if we just want to have the best possible AI creative copilot: is our model, trained on this very limited data, truly the best model for making that image or video? Or should we give users a choice between our model and these, like, four other models that we will basically offer within our ecosystem? Which, I think, if they do go that latter route, which they sort of signal they will, is a really interesting distribution opportunity for startups, because most startups have no way of reaching the hundreds of millions of consumers at once that are using Adobe products.

That's a great point. I didn't even realize this, but they've said that they likely want to bring in these other models. So they can be the model that's safest, that's there first, and that makes sure it's trained only on rights-cleared data, but then they can integrate these other models and maintain their dominance with how many people have Adobe subscriptions.

Yes, exactly. I think they've talked about that extensively for video. I think they reiterated it, maybe with Pika specifically, with the most recent release, but before that they had talked about Sora and other video models as well.

They're pretty interesting. I think even for years, they've allowed outside companies and developers to build plugins on top of the Adobe suite. And some of them seem like things that Adobe itself would want to build. Like, for example, a whole category of products that has preset editing settings and allows anyone else to use those; you might think that Adobe could do that. But if I were them, I would be thinking, hey, actually, we may not build the AI-native version of Adobe ourselves, but we will become stickier as a product if we let others build those AI-native tools and make them usable in Adobe, versus sending those people off to build their own products and pull users away from the Adobe suite. I think we still feel like there will be one, if not several, standalone AI-native Adobes that come out of this era. But yeah, we'll see.

To your point, it does feel like the model that was shown in their article was more based on, like you said, people who come with existing content. You can up-level it or chop it up in some unique way, but it's not so much, as you said, AI-native, start from nothing, start with your text prompt, or something like that. Well, this has been great. AI has been moving so quickly, so we will have to do it again when there are more models and more announcements.

Awesome. Thank you for having us. Of course.

Thank you. All right, you heard it here. If you'd like to see these kinds of timely episodes continue, you've got to let us know. Put in your vote by sharing this episode or leaving a review at ratethispodcast.com/a16z. You can also always reach us with future episode suggestions at podpitches@a16z.com. Thanks so much for listening, and we will see you next time.