Hello my friends and welcome back to the Future of UX podcast. This podcast episode is recorded in Sardinia, in Italy, on this beautiful island. I am currently on vacation here, or a workation, as you call it nowadays. So I am basically working during the day and then exploring a little bit later in the afternoon and evening.
It's super nice here, and I'm recording from our van, because we are exploring Sardinia with our newly built van, and yeah, I'm just sitting here right next to the beach. It's pretty hot in this van, so I need to open the window a little bit. So if you hear any beach waves or any birds, I hope it's not too big a distraction for you, because I really wanted to
record this podcast episode to talk about AI strategies for companies. I think this is super important, especially for us as UX designers: to understand the strategy, get an overview of what's on the horizon and what's about to come, talk a little bit about what's coming at Apple's developer conference in just a few weeks, but also
about some of the tools that have just been released, like the Rabbit R1 or the AI Pin. So I would say we will get started with Apple, right? Here's the scoop on Apple and AI right now. Apple is generally known for creating amazing user experiences. But when it comes to AI,
yeah, they are kind of the underdog compared to Google with Google Gemini or Microsoft with OpenAI. And despite that, Apple is betting on its massive user base to catch up. Some people have already guessed what's coming, and there have been some leaks. So those are leaks, and we will talk about some predictions. Those are not facts.
But at their upcoming Worldwide Developers Conference, Apple is probably going to roll out new AI features in iOS 18 and macOS 15.
They are calling this initiative Project "Grey Matter". And the cool thing is: these AI tools will be built into apps we use every day, like Safari, Photos and Notes. A lot of people have already guessed that. Apple is focusing on a lot of practical tools that make our lives easier, like better notifications, transcribing voice memos and smarter photo editing. Very simple things that are a little bit more transparent, that are happening in the background, basically.
So on the other hand we have tools like the AI Pin or the Rabbit R1, and I mean, there have been a lot of problems and disappointments with these tools. I personally had really high hopes, and I think it's still very revolutionary what they have done with these products, but people got really disappointed. Just a little
summary of what happened. We have two super inspiring AI devices that were brought to the market in the last few months. And let's talk first about the AI Pin, which is this mini device that you pin on your clothes. You interact via gestures, via voice, via tapping, and the interface is basically projected onto the palm of your hand.
There are no apps; you only have a subscription that you pay for,
and of course the price to buy the device itself. And then you interact with it very naturally. It all sounds super revolutionary, but it came with a lot of problems: heating issues, really laggy and slow performance, a somewhat problematic magnetic attachment mechanism, and the projector functionality, which sounds so interesting, basically wasn't tested properly. It's innovative but underwhelming. So the idea of projecting
information onto your hand sounds awesome, but in practice it doesn't work so well, because people have different hand sizes and hold their hands at different angles. It's not really practical, right? Humans are not used to holding their hand up in front of their face like that. It's a very unnatural position.
There's also the outdoor visibility, the size limitations and general usability problems, like having to enter your password all the time, which is a pain when you want to move it from your jacket to your pullover, for example, and then back to your jacket again. Privacy concerns, limited flexibility in how you wear it, and yeah, users just didn't really love it; they felt it was awkward.
And now they're actually trying to sell the whole company, because it hasn't been a success. The technology is still super innovative and interesting, but the problem that I'm seeing is that there was a lack of user research: really talking to users, observing them and their behavior. User research is just such an important part of any design process, and I think this is a good example of what happens when it's missing.
The second super interesting AI device that we are talking about is the Rabbit R1, this Game Boy-like device in bright orange with a display. I think this one is a little bit more promising. It's cheaper, at 199 euros. The way it has been shipped, though, is definitely undercooked; it's missing a lot of promised features. The hardware was designed by Teenage Engineering.
It promises a lot of learning capabilities, but there are also problems with the user interactions. Despite having a touchscreen, you still need to use a scroll wheel for some things, and there are general problems like adjusting the volume. It just wasn't ready to go to market. It's not quite ready for everyday users yet, so people were really disappointed.
Again: innovative, but rushed. A super exciting gadget, but as I mentioned, it seemed very rushed. So people are really disappointed about these two devices, also from a user experience perspective.
But what I think is that they really rushed into it because they knew that other companies like Google, Microsoft or Apple are not sleeping. They're also working on their own AI features and AI integrations. And especially when you think about having
Rabbit R1 functionality in your iPhone, or in your Android phone, your Google phone, that would be so much more convenient for users, right? Because they already have their phone. They can use their phone in its default mode, but they could also use this more AI-first approach. And that's what Apple is currently doing: integrating it very smartly into their devices. We will see what they're going to talk about at the developer conference.
But some of the new features that might be coming, that have been leaked, are, for example, an enhanced Siri. So Siri, of course, is getting an upgrade to sound more natural and be more helpful, using Apple's latest AI models. We will talk in a second about what the problem with Siri currently is.
Then they are probably going to integrate generative AI for emojis, so custom emojis basically created on the spot based on your texts. Sounds interesting. Then a more flexible iPhone home screen, where you can change the colors of the apps and place them wherever you want, something like that. And also smart recaps, where you get summaries of missed notifications, messages, web pages and more, which sounds super, super handy.
So there's still a problem with Siri, right? If you use ChatGPT or any other large language model, you might notice that there is a huge gap between Siri and ChatGPT, especially with the GPT-4o model. You can't really compare them. And Apple is not stupid; of course they know it. But obviously, they seem to be having problems improving Siri at the moment,
or integrating functionality similar to ChatGPT, for example. So sooner or later they plan to integrate a ChatGPT-like chat interface, or a ChatGPT alternative, basically. But Apple isn't there yet, it seems to be taking quite a while, and competitors are making progress while Apple is not quite keeping up.
And you also might wonder: why is it taking Apple so long to improve Siri? Why is Siri still lacking? I think one possible reason is: while OpenAI has really scraped the entire internet, including books, articles and millions of pieces of copyrighted material, Apple is following more ethical guidelines,
which makes it more challenging and time-consuming to come up with content it can use to train its models. It's similar to what we see with Midjourney and Adobe Firefly, for example. Both are image generation tools: you use prompts as input and they generate images. Midjourney is slightly ahead of Adobe Firefly, but it was trained using copyrighted images from designers, from artists, from photographers, and
there hasn't been a lot of filtering, basically. So you can create a lot of images that are actually copyrighted, like Mickey Mouse, for example, which you don't have the rights to use, or material from other companies. Adobe, however, approached things very differently, by using stock photos for training and compensating the artists and photographers.
And I think this is a very, very different approach. This method is naturally more expensive, of course, because you need to pay people, and it takes a little bit longer because you need to find a lot of images. But that makes it the ethical and right way to train these models, right? It reflects how we as humans want to use these tools and how we want to interact with AI. But of course, it takes a little bit longer.
And Adobe Firefly has now released Model 3. It has improved so, so much; I think it's impressive, the results are great. But the process took a little bit longer than with tools where you can just basically steal everything online and put it into this black box. Of course, that's faster, right? So it takes a little bit longer initially, but in the long run, it's worth it. And I assume that Apple has a similar problem at the moment.
So Apple is probably looking for alternatives, and there have been some discussions around partnering with OpenAI to integrate their chatbot into iOS 18, because Apple has realized that their own chatbot isn't ready yet. So they might team up with OpenAI for a more advanced solution.
But they're also talking to Google about possibly using their Gemini chatbot. It's a big move that shows Apple is very serious about getting the best AI tech out there. But I think it's also something we should be very skeptical about, especially with the things that are happening at OpenAI. Maybe some of you have heard about the drama with OpenAI. There was a bit of a... I mean, drama sounds too negative,
but it wasn't a small thing either: an issue with the actor Scarlett Johansson. OpenAI released their new model, GPT-4o, a few weeks ago, and by the way, I made a YouTube video about this new model where I showcase all its features, so make sure to check out the video to see all the new features and also some examples of how you can use this new model.
You can find the link in the description box. So here's what happened. OpenAI asked Scarlett Johansson if they could use her voice for the new model. Why Scarlett Johansson? Pretty simple: because it was her voice that was used for the computer in the movie "Her". Most of you probably know the movie,
where a man falls in love with a computer voice. Scarlett Johansson, however, declined the request; she said she didn't want to do it. But what's interesting is that the new voice, called Sky, still sounds exactly like Scarlett Johansson. So if some of you have noticed that the new voice sounds like her, like the computer in the movie "Her",
that's why. It really does sound like Scarlett Johansson. And she is now taking legal action. As outsiders, of course, we don't have all the facts, but it's clear that the voice sounds just like hers. The voice is currently offline, so you can't use it anymore. But it's very likely that her voice was used without her consent.
And this casts a very, very bad light on OpenAI and shows what they're capable of. And let's not forget the personnel issues not too long ago; those also caused a lot of discussion.
And I think this really highlights the risk of partnering with companies like OpenAI. Apple and other companies really have to be super careful about these partnerships and how they impact their brand and user trust. On the other hand, it's super difficult to create your own AI models that are comparable with what ChatGPT can do at the moment.
So now let's come to a little conclusion. What does it all mean for user experience? This is what's interesting for us: we are UX designers working in innovation and design. So what does it mean for us? First of all, I think it's a reminder that cutting-edge technology is exciting, and for us it's super interesting to look out for tools like the AI Pin or the Rabbit R1.
But it doesn't always mean that these technologies will become super successful, will really be innovators and revolutionize the market. Instead, a lot of these functionalities will probably be integrated into our day-to-day tools. And it also shows that the real challenge is making sure that we meet user needs and expectations.
Bringing half-baked tools to the market is a big problem. So integrating AI step by step is super helpful, also for users, to help them adjust. And Apple's AI strategy really shows that it's important to be very thoughtful and user-centered, but also that even these companies struggle to integrate AI at a bigger scale. I'm super excited for the Worldwide Developers Conference in just a few weeks
and can't wait to see the updates. This is such a good reminder that we as UX designers need to stay up to date with AI, understand what's going on and be part of that revolution, to understand how we design these tools. AI will be integrated into basically every tool you can think of, so we need to learn how to do that. And if you want to learn more about AI integration, AI patterns, how to integrate AI,
feel free to sign up for my free UX newsletter, where I'm sharing insights and tips about UX patterns, about AI tools, about resources each week. Sign up and you'll get it right in your inbox. Thank you so much for listening, and I hope to hear you next week again. So, hear you in the future. Thank you so much and bye-bye.