Hello and welcome back to the Future of UX podcast. I'm Patricia Reiners, your host. And in this episode, I have an amazing guest with me: his name is Greg Nudelman. He is a UX designer, a strategist, a speaker, and an author. For the last 20 years, he has been helping Fortune 100 clients like Cisco, IBM, Intuit,
and more to create loyal customers and generate hundreds of millions in additional valuation. Currently he's working at Sumo Logic, creating innovative AI and machine learning solutions for security, network, and cloud monitoring. He has also worked on 32 AI projects.
He has delivered over 117 keynotes and workshops in 18 countries and authored five UX books, and his sixth book, "UX for AI," is planned for release in 2024.
I have been following Greg for quite some time on LinkedIn, and I really love his newsletters and the articles he shares — always very exciting and really on point. So I'm super excited about this conversation. We're going to talk a little bit about the UX design process and see how the process might change when using AI or working on and designing AI products.
We are also going to talk a lot about tips and tricks for UX designers who want to focus on AI. So enjoy this amazing episode with Greg. So welcome, Greg. I'm so happy to have you in this podcast episode. Welcome. Thank you. Yeah, it's great to be here. Much obliged. I'm so happy that you're taking the time. I know you're very busy, so I really appreciate that you're here now with us.
And I would love you to do a quick intro. Tell the listeners a little bit about your background, who you are, and how you got to where you are at the moment. Sure. I started out, I guess I started out as your normal chemist in a chemistry lab. And that's where I got kind of a taste for research and really trying to understand our world. And also the high-tech aspects of it, just really...
How are we making an impact? And then I moved into software development and have been a developer for a while, full stack, all the way down to being a DBA. And then I realized that where I am making the most impact and what appeals to me the most is really understanding how, not building systems, but really understanding and improving how customers and people in general interact with systems. And that led me to
Norman and Nielsen's books and the rest, I would say, is all UX from that point, like UX design, UX research. So I've been extraordinarily lucky in my career to have been on all these different aspects of our profession. And if I can share anything that I've learned, I'm very, very happy to do that at this stage of my life.
Super fascinating. I think it's always very interesting to hear the different backgrounds, how people really got into UX and how these backgrounds are helping them right now, right? Like the science approach that you're using at the moment, especially. And you are not just doing UX, but also thinking a little about the future, especially when it comes to AI and those new technologies. So tell us a little bit about that.
what are you doing about AI and UX? How are you combining this? How does it look like in your day-to-day work at the moment? Sure. Yeah, I've been writing and working in the field of designing AI-driven systems for over a decade now. I'd say I've had my share of kicks in the face with 32 design projects. And I've...
I've kind of seen what not to do a lot of times. So the talk that I'm touring with now is "Eight Ways to Screw Up Your AI Project," and it covers all the ways that things can go wrong. But it's interesting, because a lot of the ways that things go wrong in AI projects are actually UX related, not technology related. We think of AI as mostly a tech platform, but it's actually the user experience aspects — all the things that we do normally as UX designers, like understanding the right use case and understanding the outcomes — that really roll into AI success. So now I am trying to really popularize this message, if you will, and get more of our community working with AI, both as a tool and by being involved in designing AI-driven and AI-first systems. Because without UX...
It's just that much more likely to fail, and also that much more likely to have a net negative impact, I think, on the world and our society. And so it's a matter of absolute urgency that we as UX designers and product managers get involved, because AI is just too important to be left to data scientists anymore.
And so to do that, I've started a website called UX4AI.com. I'm very, very excited to talk about these things. I've written a few books, and I have a book coming up specifically on UX for AI. That's going to be my sixth book on UX design, and I'm very excited to bring it out. It's coming in just a couple of months.
Exciting. Very excited to do that. But let's not jinx it — still working on it. I will add all the links in the description box, so as soon as the book is ready, people can find it there. Because listeners sometimes listen to an episode a few months down the line, and I feel the topic will stay very relevant for next year.
And you mentioned something super interesting, right? That there is some kind of like an urgency for UX designers to get into AI. So I'm curious, what, how? You know, what are the steps for a designer who's interested in AI to learn about AI and really get into the field? Absolutely. I think it's a... Well, so at first what I started doing
Interestingly enough, is I started kind of explaining how a lot of these AI driven systems work and how folks can use their skills to engage. But I think what I'm realizing is that people are looking for immediate help with their own projects. And so what better way to really understand the platform and understand the capabilities of AI than actually use it in your own daily work for your own UX projects?
And so that's really what I would recommend: yes, go ahead and open ChatGPT and use it to create a template for your report, or use it to polish up your report and make it more compact, things like that.
So any kind of templating is fantastic. For instance, if you want to do a usability study and you're trying to write a proposal, well, you can feed it a couple of proposals you've written before and you can say how your new project is different and it will give you a proposal that is more customized to your style and the way that you like to write.
So that would be an excellent way of doing it. Of course, anytime you use it, you want to see how it also hallucinates and gets things wrong. So you absolutely want to double check everything that AI produces. Think of it as a bit like an intern that is somewhat unruly and doesn't really know his or her way around, but at the same time, very, very capable of doing a lot of research and a lot of iterations.
very, very cheaply and very, very fast. So, you know, I guess that would be the first way to try it. But also don't forget to actually go to AWS and Azure Studio and just fire up a GPT of your own, feed it some data,
feed it some articles you like, and then see if you can ask it questions about those articles. That is called fine-tuning or prompt engineering, and you can figure out what it does there. Also go to Midjourney and see what kind of images you can get out of AI today. All of my column images come out of Midjourney; I'm very careful to say what I used as a query and what kind of prompting led to each image. It's sort of up to you how you use this, but definitely do go ahead and try it. It's a very exciting time for all of us. The first step is understanding, and that's the way we can all engage in this. From there you can build actual interfaces for this technology and hopefully succeed in your next AI project.
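As a rough illustration of the lighter "stuff your own articles into the prompt and ask questions" route described here (true fine-tuning is a separate training step) — a sketch assuming the OpenAI Python SDK, an assumed model name, and your own hypothetical text files:

```python
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment

client = OpenAI()

# Concatenate a few of your own articles into the prompt (hypothetical file names),
# then ask questions grounded in them.
articles = "\n\n".join(Path(name).read_text() for name in ["article1.txt", "article2.txt"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Answer only from the provided articles. If the answer is not there, say so."},
        {"role": "user",
         "content": articles + "\n\nQuestion: What does the author recommend for onboarding?"},
    ],
)

print(response.choices[0].message.content)
```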
This is the next super interesting topic. Now, we talked about how to use AI basically as a tool, a little collaborator, or an intern to help you throughout the design process. But what about really designing AI products? As you mentioned in the beginning, there is some kind of urgency. There are already a lot of AI products out there where the UX is not that great.
We need to step in there and help also the data engineers to create great products, right? Like with the user-centered approach. What do we need to know as the designers when we want to do that? Well, I would say attend one of my workshops. Give us a sneak peek into the workshop, like a little bit of what to expect. Definitely. Absolutely. So I would say...
The tools that you have already been using — hopefully for the last 10, 15, 20 years, however long you've been around — are also going to be very useful for AI-driven projects. There's just a bit of a tweak that you need to add. So in the process itself, for instance, think of a RITE process — rapid iterative testing and evaluation — which is probably my favorite way to approach a typical design project.
You know, we don't have a lot of time up front to do the research. We just kind of jump into brainstorming and then go to customers and start iterating on our design. Well, that's all well and good, and definitely do that. But think of making it an even more lightweight approach, so that rather than spending weeks and weeks contemplating and coming up with different things, you go in and do a rougher analysis a little bit earlier. Don't be afraid of approaching people and saying, well, imagine this is what you're doing. Let me give you one example.
I was consulting for a water irrigation company, and one of the use cases the company really had in mind was a kind of watering insurance. So you would go to these farmers — a lot of them family farms that have been around for many generations — and you would say, well, here's this AI that's going to tell you if your crops are sufficiently watered. And by the way, you have to put all these sensors in, dig down into the ground to see whether there's water at the roots, and all that good stuff. Well, they just about laughed us out of town. They were not even amused that we were offering that, because they have a time-tested way of knowing that the crops are sufficiently watered. It's called a kick test. They would literally go out in the field and kick their heel into the earth. And if the heel sank a certain way that they learned from their forefathers, then the crops were sufficiently watered. If not, they would water some more. So
does that mean we gave up? Not at all — that would make a poor story, wouldn't it? Instead we asked them, what does keep you up at night? Obviously crop watering insurance is not one of those things, but what does keep you up at night? Well, they said, there are new water regulations, and with global warming and climate change,
we don't have as much clean, fresh water as we used to. So there are very tight regulations. There's also a lot more farming, so we can't all withdraw water from the same aqueduct, right? We have to balance our water usage with how much land we have and so forth, and all of that is becoming very regulated. So if we could have a way of doing that, that would really be helpful. So: meet the regulations, minimize the water usage,
rather than maximize kind of crop yield and crop insurance. So it's almost the same use case, but flipped on its head. So rather than sufficiently watering, you're saying how little can I water and still ensure my crops are alive and yield a nominal amount, right? So balancing the money, right? So that's where...
that's where they understand the value, right? It's really affecting their bottom line directly. So that's the kind of AI system you can sell. Now, if we were to spend months and months sitting around water coolers drinking lattes, no matter how smart we are, we would not have come up with that use case. But simply drawing it out on paper, and I brought a couple of examples, I brought an example here, I guess, of a use case
So here's one that I just worked on for a recent workshop. So this is just an example of a quick storyboard. So a bit of a show and tell that
You can see it's very, very simple. So you essentially say, well, here's our use case. Here's graphically what that looks like. Let's all agree that's one. If you have other use cases, go ahead and put them down as well. So it doesn't take a lot of time to draw these out. And I teach you in the workshop exactly how to do that. And then you can go to customers and ask them these questions about it. You can walk them through it and say, does this actually make sense to you? Would you do this? Does this keep you up at night?
And if you hear stuff like, well, for me it's okay, or, I'm not super interested, but that neighbor farm over there, you should talk to them — that's not what you want, right? What you want is: this is awesome, when can I have it?
Those are the magic words you want to hear. And when you hear that, that's when you know you got something valuable. And then, again, going sort of in your design, going mobile first, going very, very simple, again, with something like Sticky Notes. So this is now interface design for that particular project, which is Earth Clock. So it's kind of to see how much time the Earth has. It was just a very...
a very esoteric kind of use case where we're trying to figure out how we can minimize the impact on the earth and the environment by watching what you eat. So you take a photo of your plate and you can see whether that's actually helping or not. You're taking a photo of your food and saying, yes, this is going to make me stronger and at the same time give the earth a bit more time. I'm not saying it's the end-all, be-all, but again, make it simpler, because a lot of folks can't imagine the interface. And with AI, the interface is often just, say it's magic, right? I'll wave the magic wand and this happens.
Is this going to be of use to you? So getting the use case right is one of the number one ways that you can contribute to your project.
immediately. And again, these are the tools you already have. Just really go early, go quick, and go lean, and start the research as soon as you can. Don't wait for the whole thing to be completely ready. Because so many of us just wait for product managers to bring us the full scope and purpose of the project, and then we start designing these features. That's not
that's not the future. If that's what you do and you're going to continue doing that, you're going to be out of a job in months, if you're not already. It's just not adding that much value. Whereas on the other hand, if you can be strategic, if you can understand what AI does and what its capabilities are, and then really understand your customers and their use cases, their pain points, and how your company makes money
addressing those pain points, that is the magic combination that you can use to propel your career forward and also just really make an impact in the industry. Super interesting. And I feel it's some kind of an unpopular opinion that you don't go step by step through the whole design process. I'm having those discussions with a lot of designers and I think for me, I almost can't believe it that there are still people out there who think
Every project needs to start with research, of course, right? And then we need to define and ideate. No, this is not how reality looks. And I think it's interesting what you mentioned, right? That this will become even more lean, and you as a designer need to understand what to do and when. Sometimes you need to do research first. Sometimes maybe not, right? Like you start with a prototype. Yes.
And then you iterate from that point on. And I think this is super important also for designers to understand, right? Because also what you learn in design schools is like go from, you know, like go through the design process, basically those five steps. And this is actually not the case and also won't be the future, right? Yeah. Absolutely. I think one of the key things
One of the things I heard early in my career, fortunately, was Jared Spool talking about the UX dogma. He said, let's just say that in the UX dogma, every time you want to cook, you start by making a roux. And roux — I don't know if you or your listeners are familiar — is essentially some kind of fat, let's say butter, and some flour. And then you
very slowly and meticulously mix the flour into the hot fat. And over time it becomes a roux, which is important for, say, New Orleans cooking — gumbo and jambalaya, that kind of thing. A lot of French cooking uses it. But it's very tedious. It takes a good 30 minutes of constant mixing, throwing in a little bit of flour at a time.
I'm a bit of an amateur chef, so I know a little bit of this, but it's very tedious. But let's say every time you sit down to make something, you start by making a roux. Well, that's just silly. I mean, if you're baking a cake, you don't need a roux. You can just go bake cookies or brownies. Like, you don't need roux for that.
You only need it when you need it. So don't invest in making a roux if you don't need it. That's honestly how I feel about these reports and proposals. Yes, if you're a junior designer or a junior researcher,
and this is one of your first dozen projects and you've never done this before, by all means, find the right template and create a proposal, that kind of thing. If you have a client, again, same thing: create a proposal for your research. This is how much it's going to cost, this is how many hours, this is how many people we're going to talk to, these are the kinds of profiles we need. But most projects don't need a formal proposal
that you spend two days crafting. That's just silly. It's just a normal flow of work where you already have your customers identified and you probably already have some that you're talking to
Don't you? I hope. I hope you do. If you don't, then you're in trouble already, right? But you don't need a formal proposal that no one's going to read. It's just going to go into an electronic filing cabinet. And for most proposals, ChatGPT can help you write one in five minutes, and then ChatGPT would consume it too. Like, what is the point, really, of making these electronically produced documents? If it helps you ground yourself, if you're very junior, then by all means. If it's something new, again, by all means. But if it's not, skip it. Go straight to: here's my idea, here's a storyboard. Let's have a kickoff with the PM, the developer, the data science person, and UX, and let's all agree: this is what we're doing.
And let's go ahead and sketch these out, because that storyboard is going to be a lot more powerful. It brings all these elements together in a much more tactile, concrete way that ChatGPT would not necessarily be able to produce. It's a product of your brainstorm. It's a story that you tell. But it takes only about 10 minutes.
And then you already have an artifact that you can go in to customers with and talk to them, like use the artifact for conversation. You can't do anything with the research proposal.
they're not going to be reading it, right? You're just stating your assumptions and so forth. Sometimes it's useful, but it's like roux: you don't always need it. Paperwork. Think of it as a paperwork reduction act — AI is your paperwork reduction act. Any paperwork that AI can produce, you probably should not be producing. You should be going past that already. Yeah, that's a good way to think about it. Makes so much sense. Yeah.
And I think clients or stakeholders also really love it if you're proactive and recommend next steps. I mean, planning is great, right? But giving people on the team the feeling of: I have things under control, I as the designer know what to do, I have a method — that is so helpful, especially at the moment, when people are not so sure how to handle these new technologies. AI, UX — is it important? Yeah.
What are your thoughts about UX designers changing their workflows? And of course, there are also some layoffs at the moment. So how do you think the UX industry will change with AI? Yeah, I think it's just the beginning. It's going to be a complete turmoil of what we're just getting glimpses of. What we're doing today is going to be so different
even a year or two from now. It's going to completely turn on its head. Now, first of all, the daily RITE process. We already mentioned the paperwork reduction act. Think of it this way: if you're doing busy work, just stop. That's not going to help anybody. But in addition to that, you know, let's say we've always kind of considered
And we've been trained to do this by easy tech, for instance, cloud technology. We just say, well, it doesn't matter how things get deployed. It doesn't matter how things are getting coded. It's just going to run somehow. It's in the cloud, so don't worry about it, right? Same thing with UI. A lot of the common design patterns that have already been encoded with React for years and years, again, we don't need to worry about it. Like a table or a search, right?
sort of typical faceted search, table sorted by columns, typical form, that kind of thing. You don't need to debate this thing for a very long time. What you need to do is get to the heart of it and really understand now, how does this work with the AI model, with the data that you have? So
Yes, you still don't need to necessarily worry about implementation in the actual pages or what it does at the back end as much as we used to maybe 10 years ago. But now this, think of AI as like another stakeholder. And with AI comes data. So understanding what data do you have, what sort of legal requirements of the data, where's the bias in the data, what is missing in the data, how dirty or clean the data is.
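A small sketch of the kind of data audit being described — assuming pandas and a hypothetical CSV of training data with a "label" column; adjust names to your own dataset:

```python
import pandas as pd

# Hypothetical training data file with a "label" column.
df = pd.read_csv("training_data.csv")

print(df.isna().mean().sort_values(ascending=False))  # what is missing: share of empty values per column
print(df["label"].value_counts(normalize=True))        # class balance -- where bias often hides
print("duplicate rows:", df.duplicated().sum())         # how dirty the data is: exact duplicates
```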
And then, how is that data going to train your AI model, and how is that going to affect the user experience? So,
rather than just UX prototyping and talking to customers, imagine AI as another cog in the wheel. My latest article is specifically about this type of workflow, and it's got a lot of Q&A. So if you want, go to UX4AI.com; it explains what I'm talking about here.
It's hard to do a visual with your hands. But definitely the AI model and the data are kind of an additional cog that you need to revolve around. So that's number one. So your day-to-day kind of process, your rapid iterative testing and evaluation now includes this other component that wasn't there before. So that's number one. So that's just ideation and
day-to-day design and sketching, that kind of thing. It used to be three in a box, where we had UX, PM, and the dev lead. Well, now it's four in a box: we're adding the data scientist. So that's the reflection of that — from three in a box to four in a box. So that's just the daily, day-to-day things. Now let's talk about how we actually go and
mock things up. A lot of folks spend a lot of time in Figma and have become Figma masters. They've mastered auto layout; they've mastered all of these beautiful things — the inner border, the outer border, the overflow background. Well, that's all fantastic. But let me tell you, in months this is all going to happen overnight
with AI driven stuff. And not only is it going to happen that way, but it's going to go straight to code. There's not going to be a picture in between. It's going to go straight to code. So from your drawing or from the components in the design system, straight to React UI. And then on top of it, the AI is going to create the content for it.
So if you want to kind of think of a mock-up as literally a working system with the data and almost the full stack that's already there, that is all written with automated tools based on system components that, again, are created and maintained with a very heavy AI use. So you can essentially go from an idea to a working system in hours, right?
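A minimal sketch of the "design-system components plus a prompt, straight to code" idea — assuming the OpenAI Python SDK, an assumed model name, and a made-up list of component names; the output still needs a designer's and a developer's review:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment

# Made-up design-system manifest: the model is told to use these components and nothing else.
COMPONENTS = ["AppBar", "SearchField", "DataTable", "EmptyState", "PrimaryButton"]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You generate React screens using ONLY these design-system components: "
                + ", ".join(COMPONENTS)
                + ". Output a single TSX file and nothing else."
            ),
        },
        {
            "role": "user",
            "content": "A log-search screen: search bar on top, a results table, "
                       "and an empty state when there are no results.",
        },
    ],
)

print(response.choices[0].message.content)  # review before committing -- it will get things wrong
```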
And that's really where we are. So instead of having a single system that is output — because that's going to be too easy and simplistic — it's going to be a lot more adaptive. Let me give an example. In the old days, we used to talk about things like, well, here's a wizard, for instance, for newbies. And then...
It's really tedious to fill out five pages for somebody who's done this before. So we're going to have, let's say, an accelerator type of form. So if you click on advance, there's an advanced form with not a lot of help that just kind of takes the essentials. Or even more advanced, we're going to have kind of bulk upload.
So let's say you have an Excel file or a PDF file you can just upload, and that solves a lot of the problems. If you're a beginner, you want to do it one record at a time. You want to do your CRUD — create, read, update, delete — right? You want to do all of that step by step. But if you're an avid user of the system, then you want this advanced functionality. Well,
these are just two polarities. And often we didn't have the budget to build both, so we would just build one. Maybe we start with the advanced one, then add a lot of help for the basics, and then call it a day because we've run out of money. Well, with this new paradigm, you can actually create extremely customized, AI-generated UIs —
I wouldn't say in real time, but in almost real time, because there's not a lot of additional cost to AI doing these iterations, to AI doing the QA testing, and to AI doing all the validation to make sure the code works. So you can literally have highly individualized — not just customized but individualized — AI-
created systems that understand your individual preferences, maybe from your own virtual assistant. Let's say you're getting older and you like larger text, or you like more contrast, or you are not as familiar so you want to take your time and really validate it. Well, that's the UI that your AI is going to render
for you. Let's say you have a certain disability and you really need to use screen reader. Well, let's go ahead and make it optimized actually for the screen reader so that our accessibility will become reality.
And there's a much-maligned Jakob Nielsen article out there about this, where I think people really misinterpreted it and created a lot of controversy that, I think, was unneeded. Because Jakob Nielsen has been one of the original advocates for accessibility, going back many, many years. And I've been a fan and a follower of his
for over 20 years. But the point I'm trying to make is he's recognizing that a lot of companies just don't have the money or didn't have the money in the past decade or two to create accessible UIs. Well, I think that's about to change because if the company has a good portion of their customers who are...
have these special needs or are disabled, they are going to be able to use AI to produce interfaces that are not just accessible in general terms. Today, you have one UI: for instance, you go to apple.com, and you and I see the same UI. Well, that's not going to be the case anymore. It's going to be totally personalized to you and totally personalized to me — and not just the choice of products
that is going to be determined by your purchasing history. That's not even what I'm talking about. It's beyond that. The entire thing is going to be individualized. I might have a larger text than you. I might have higher contrast. Or if I can't see at all, it's going to be optimized for a screen reader right away. So right a priori. So you wouldn't even be able to use the interface that I use potentially.
We can evolve along different parallel lines. You're going to have an interface that is specifically customized for a certain type of disability, and that is going to make this a much better experience from the ground up. And this is all going to be possible with these components that are AI-created and AI-maintained, probably built in React or some other similar tech, and
basically going straight from a picture to a certain type of code. Does that make sense? I think a lot of people really struggle with this, because we can't even imagine more than one UI. So when Jakob says, well, you know, apple.com is not going to be accessible — that's actually not what he's saying. He's saying it's going to be individualized per person. So
you're actually going to get the best experience you have for the type of senses and the type of capabilities you as a person have. And if you're 10, you might get a very different UI than if you're 50, for example.
And so forth. Like that's just the beginning. It's literally individualizing the web and it is within our grasp right now. So all the ways that we're doing things are going to completely change. And this is just the beginning of the controversy that is going to result from this. And I can't wait. The next 10 years are going to be the biggest turmoil of our lives. It's going to be so...
Freaking crazy. It definitely will be. And personalization is still something that, as you said, a lot of people can't really grasp. They don't understand it. They don't believe in it. They say, no, how should that even work? But a lot of things are changing, and it's so difficult to predict the next 10 years. That everything stays the same? That's impossible, right?
So I think there will definitely be a lot of changes. One thing that I am very interested to hear your opinion on: what about all the people who are designing the perfect layout in Figma, who are building design libraries? We can't all design storyboards in the future. So what about the rest of us? Well, absolutely. The good news is we actually need more designers
than we have today. I think it's going to be a very useful skill. The challenge is what parts of the skill are going to be useful. If you're talking about
Let's compare it to early car mechanics. In the earliest days, cars were built by hand. So you had to know a lot about design, you had to know a lot about manufacturing, and you had to make your own tools — maybe have a lathe where you could turn, let's say, cylinder heads.
Well, no mechanic now is going to be doing that unless you're doing very custom, high-end stuff. So instead, people have specialized: they went into the design, the interface design, plastics design, interior, exterior, tuning the motor, all that kind of stuff. So
rather than the bespoke design we've mostly been doing, this is now going to be design at a more industrial scale. And rather than it being this ivory tower of, well, we're the priests of customer knowledge, you must come to us and you must go through us to talk to customers —
well, that's not going to work anymore. It actually hasn't been working for a very, very long time. So if you still think it's your sole prerogative and your right to be the only person who talks to customers, I've got news for you: that's not going to work. It's going to be up to everybody to do it. In the same way, as a UXer, you need to understand the capabilities of AI models. You need to go and build a few little bots for yourself to play around with. You need to go and try out Midjourney, really understand what works, what doesn't, what makes it tick,
and how do you control it, and how do you get it to do what you want? It's up to you to understand it. So it's at the same time much wider and more specialized. So absolutely, we're going to need folks who work on design systems. In fact, I foresee that as probably one of the top areas of employment opportunity for us. Once people realize that there are tools out there that can essentially take a picture of an interface and turn it into working code,
there's going to be all these folks that are needed to actually create these components. But to do that, you need to know a little bit of React. You need to be a little bit of a unicorn. And you need to go and create those components, try those components, document those components, and create systems that sort and...
and correctly create the right component for the right prompt. So a lot of that is going to happen. So if that's your calling and you're really good at, let's say, auto layout, you're really good at Figma, and you want to push pixels, that's where your future can be very, very bright. And that's really good. But you are absolutely going to need to learn some code. There's a glimpse into this already. There's actually a fantastic
article by Brad Frost. He's got a new company — the name escapes me for a moment — but if you just Google "Brad Frost AI design systems," you're going to come up with a whole bunch of stuff that he's been writing about. He actually shows you the code that AI has been producing for the components. As you know, he's the author of atomic design, the design-system approach we've been using very heavily for the last decade or so. And
I'm a huge fan of his, and I've been following his advances with tremendous interest, because he's doing exactly what I've been telling you: he's taking a component, creating the code behind it, and using AI to do that and to do the documentation and the maintenance for it. So definitely check out his work. He's once again at the forefront, pioneering this type of AI application. There's definitely a future for you if you like to push pixels; you just have to learn a bit of React and
a lot of AI, I would say. So design systems are definitely a huge area. Now, if you're more like me, a generalist — I like to do a lot of research, a lot of ideation, a lot of patents and things like that — absolutely, there's a huge, bright future for you too. Now, I've been super lucky to be
here at the right time. I've got a lot of patents, and a lot of patents pending, that I helped my customers and clients create — but you can too. In fact, UX, I would say, is the huge on-ramp for doing that. Securing intellectual property is a huge opportunity for UXers. Not only is this directly innovative, but it allows you to
find and fine-tune the invention to the customer's needs, so that you're not just inventing something out of the blue, like a bathtub on wheels — that's not going to help anyone, right? Well, actually, there's a patent for that. Instead, you're creating something that the customer needs and that the company can invest in, to really make innovation
happen. So instead of being just a designer, think of yourself as an ombudsman of innovation. You're an advocate for innovation; you're an advocate for AI. You may not think this is very significant, but it's huge, because the only way that companies can secure a patent today is through a design patent. There's no way to patent an AI model, and anyway, it wouldn't make sense,
because AI models evolve so quickly these days. Just think about it: in the last year, we went from the first ChatGPT to GPT-4, right? And we hear 4.5 is coming out. It's literally happening so fast. You can't really patent the model itself; maybe the way you train the model, if it's innovative, that's something. But
Again, that's very much on the periphery. What you can patent is an experience that AI then is creating for your customer. That is huge. And then thinking of new ways to apply AI to experience to really get the value. Sometimes the value is going to be exponential. Look at, for instance, look at my articles on the Microsoft Copilot and the way they've been trying to put it in all of their different products. Some succeed, some do not.
guess which ones succeed the most, where UX designers really thought it through and have done the research and have done the work and have maybe created some pre-prompting. So I don't need to figure out the prompt to write. I can just click a button and then off it goes, right? Like there's a lot of tension between actually how you prompt it, how you work with it. And that's just LLMs. Also things that create images that...
that use images and then leverage that for a better experience. So, tremendous opportunities. Think of the way the 90s were with the internet — this is that on steroids. It's like 10x. And UX should be right there in the middle. Again, because without UX know-how, it's just...
It's just going to be more of an academic project; it's going to have the feel of that, and it may not succeed in producing ROI. In fact, according to Forbes — not according to me, according to Forbes, right? If we don't trust Forbes, who can we trust? — 85% of AI projects fail to produce ROI. 85% — think about that number. That is huge. And still people keep trying. Why? Because the opportunity is tremendous and folks do recognize it.
But without UX, that number is 85%. With UX, that number is much, much better or should be. And it's up to everyone on the call to really put their best foot forward, learn it, engage, be the ombudsman of innovation and really move the needle. And not just money, right? We're talking about
legitimate impacts to our society. Anything from fake news and lies and election denial and so forth, right? And the rise of very unfortunately right-wing ideologies around the world, all the war and famine and upheaval. Well, guess what? With AI, this can potentially get considerably worse.
because it's so much easier to lie with AI — to create fake videos that look so real you can't even tell the difference, or fake voices. And we're not just talking about cats; this is totally fake video that looks indistinguishable from the real thing. So we need to engage as a matter of urgency, because without that, our lives as we know them are going to be impacted
in ways that are very, very negative. So I hate to say it, but UX is really going to be a way to get to the AI ethics and really understand and put the guardrails in place so that AI is ethical, is used ethically, and then it actually has a positive impact on our politics, our environment, our freedoms, that we preserve our way of life and even improve it.
It's a tremendous tool, but it's also a tremendous danger, and we need to be engaging with that. Without it, it's going to be a very dark place. Yeah, I agree. So first of all, I really love that you emphasize the importance of UX, highlighting that UXers have a great future.
Of course, it depends a little bit on where you focus and how you present yourself, right? If we dive into AI, learn the tools, the methods, maybe a little bit of code, the future will be much, much easier for you as a designer. But UX will still be very relevant for all AI products, for everything that's ahead of us, right? And you mentioned the ethical part as well, and maybe you can give us, like,
One, two tips about how to deal with all these ethical questions that we are currently facing, right? What do you do as a designer? How do you encounter basically these ethical problems?
Absolutely. I think it's just a beginning, really, and we're just starting to explore it. But let me give you a couple of examples that I like to use. One is just understanding the outcomes of the predictions. Most AIs, all they do is predict: they forecast some type of outcome. And most AIs today are trained using something called a confusion matrix.
And it may sound very confusing, but it's not meant to be. So all it is is just a count of different outcomes. For instance, if you guess correctly,
it's a true positive. Let's say you're trying to predict whether a coin will come up heads or tails. You flip a coin, right? If the AI correctly predicts heads, that's a true positive. If it correctly predicts tails, that's a true negative. If it guesses wrong, you get, correspondingly, a false positive or a false negative. So there are four different outcomes associated with a single prediction.
It's harder to do this with my hands. But when you talk about these four outcomes — most models, again, are trained against a confusion matrix — all we're doing is the count. We say, well, this particular AI model guessed correctly X number of times and incorrectly Y number of times. And then you do various calculations and come up with data science metrics like accuracy and recall.
And so when we talk about AI, we equate the term "accurate AI" with terms like "good" and "safe." Well, actually, nothing could be further from the truth. Let's say you have a very accurate AI that is trying to figure out who is a terrorist:
you're checking in at the airport counter, and there's an AI that scans your face and decides, is this a terrorist or not? If the AI decides you're a terrorist, you get pulled aside for secondary inspection. So let's take that scenario. Well, in this case, an accurate AI would pull nobody aside. Again, an accurate AI would pull nobody aside. Why? Out of a million people getting checked in, maybe one is a terrorist.
An AI that just says "nope, not a terrorist" to everyone is accurate to 99.9999%. Who doesn't want an AI that's 99.9999% accurate? It's never going to pull anyone aside. It's never going to be wrong — or rather, it's going to be wrong one time out of a million, but that's the one time we were supposed to catch, and it's wrong. So this AI that is very, very accurate
is actually actively harmful. It's not just a bad idea; it does exactly the opposite of what you want.
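A back-of-the-envelope sketch of why that "accurate" detector is useless — assuming a one-in-a-million base rate and a model that simply never flags anyone:

```python
# One million travelers, one actual terrorist (assumed base rate), and a lazy
# model that always answers "not a terrorist."
total = 1_000_000
actual_positives = 1
actual_negatives = total - actual_positives

tp, fn = 0, actual_positives   # never flags anyone, so it misses the one real case
tn, fp = actual_negatives, 0

accuracy = (tp + tn) / total
recall = tp / (tp + fn)        # share of real threats actually caught

print(f"accuracy: {accuracy:.6f}")  # 0.999999 -- sounds fantastic
print(f"recall:   {recall:.6f}")    # 0.000000 -- catches nobody, actively harmful
```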
Now let's look at another use case. The Boeing 737 MAX has been plagued by a lot of problems, including the door that blew off, all that kind of stuff. It actually makes me a tiny bit apprehensive to fly. But no, I trust aviation officials, and I'm very excited to be going to Copenhagen. So I'm not going to let that
deter me from going to Copenhagen and Lisbon in the next couple of months. But what I should say is that there were a couple of incidents right when that airplane came out where an onboard AI would overcorrect. A sensor was wrong, but the AI kept pushing the nose of the airplane down, thinking that the airplane was pitching up and stalling. So while flight stability is important,
Pushing the airplane nose down toward the ground right when it's trying to take off is a terrible idea. So when you look at the graph of this airplane trying to take off, every time the pilots would pull it up, AI would force it down. So it didn't lose any opportunity to pull the nose down. That AI was very aggressive.
It was trained to be aggressive, not to miss any opportunity to interfere with the flight path. So again: you've got something that's very accurate that's very harmful, and you've got something that is very aggressive that is also extremely harmful. In the case of the airplane, you wanted an AI that was very, very accurate, that was trying very hard not to be wrong.
And in the case of the terrorist scanning, you want not a very accurate AI but a much more aggressive AI — one that takes a lot of guesses and is maybe wrong many times, but could prevent a huge disaster down the line by finding the terrorist. So
you've got two AIs that seemingly were trying to help people but actually did the opposite — they were actively harmful. And in the case of Boeing, it killed 346 people on two different flights. So whenever AI is interacting with the real world, there is an actual, real-world outcome attached to each prediction. And one of the things we can do as people who care about AI ethics is understand what the value of that outcome is. If the AI guesses right, what is the value? If the AI guesses wrong, what is the cost? How do we prevent the wrong outcomes? How do we optimize? Because most AIs in most use cases are not going to be overly aggressive or overly accurate; they're going to be somewhere in the middle. So how do we balance that?
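One way to sketch "injecting human values" into that balance: attach illustrative, made-up costs to each cell of the confusion matrix and compare models on expected cost rather than raw accuracy:

```python
# Illustrative, made-up costs per outcome: missing a real threat (fn) is far worse
# than an unnecessary secondary screening (fp). These numbers are the "human values."
COSTS = {"tp": 0, "fp": 50, "tn": 0, "fn": 1_000_000}

def expected_cost(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return (COSTS["tp"] * tp + COSTS["fp"] * fp + COSTS["tn"] * tn + COSTS["fn"] * fn) / total

# Per million screenings: a "very accurate" model that flags nobody vs. a
# "very aggressive" model that flags 10,000 people and catches the one real threat.
passive = expected_cost(tp=0, fp=0, tn=999_999, fn=1)
aggressive = expected_cost(tp=1, fp=10_000, tn=989_999, fn=0)

print(f"passive model expected cost per screening:    {passive:.2f}")
print(f"aggressive model expected cost per screening: {aggressive:.2f}")
# The "less accurate" model wins once real-world values are part of the evaluation.
```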
Now, that's where UX comes in. That's where product comes in. We need to ask the right questions. We need to say: what is the value of the outcome? And then optimize and train this AI on those values. This is a way to inject human values into AI training, and that's what would make it a little bit more ethical. Now, AI has no knowledge of itself. It just does what it's been trained to do. It has no
agenda. It's not trying to kill us all. Most robots can't even operate a doorknob these days, so don't worry about it. If you worry about the robot apocalypse, all you have to do is buy some round knobs that you have to twist, lock the door, climb up on the kitchen table, and you'll be fine.
I assure you. But at the same time, you've got all this AI that is operating in the physical domain that is guessing wrong or because it's been trained in the wrong way, because UX was not involved or wasn't asking the right questions. It is up to us to ask these questions. It is up to us to ask uncomfortable questions about what would happen if this was wrong? How do I turn it off?
For instance, I was driving a rental BMW, and it had a driver-assistance system that kept getting confused around my neighborhood, around the lanes. It kept trying to pull me into the curb when I was trying to drive straight, because the lane was painted incorrectly. It just kept following the lane marking; it kept trying to do it. Now, with the UI that they had, I could not find a way to turn it off.
So I had the exact same situation that the pilots had. They could not turn off that AI. It kept forcing the nose down. So I could not find it. So I had to pull over and then the option became available. I mean, that defeats the whole purpose, right? Of being able to turn off these automated systems. It should be right there, big button, turn me off, right? How can you not make that available while you're driving? That just makes no sense.
So shame, right? Shame on those designers, and shame on whoever didn't think to ask those uncomfortable, tough questions. Make sure that you can turn the damn thing off to begin with. Now, that's where...
Your ethics training, your understanding of human values, and how to impart those values to the AI is going to be critical for the next 10 years. It's going to be critical because, again, AI is just way too important to be left to data scientists. That's true. We need to get involved; otherwise, things like what happened with your BMW can happen.
Or you never know who agreed on this interface — maybe a product manager, a product owner. There are so many people involved. And once you have an investment... Yeah, exactly. Once you've invested, like with cars or airplanes in general, it's very difficult to retrofit.
And so you kind of need to get it right the first time. A lot of physical objects are like that. Now, fortunately, many of these features — outside of the unfortunate example of Boeing — are essentially driving aids. That's where you can...
You can basically use AI to say, well, there's something in my blind spot, or I need to be able to
have lane assist, or I'm trying to turn and my map says I'm turning but I forgot my turn signal, that kind of thing, right? So it's driving aids, it's assistance. Now, this is an analogy, of course, for industrial processes like what we talked about at the beginning with the watering system. So think of AI applications first and foremost as these blind-spot indicators,
and not as a self-driving car, because that decreases the potentially devastating effects of getting the answer wrong. And it also helps you deal with the situation where you're trying to go against the expert. What that company was trying to do was go against the knowledge of a farmer who's been doing this forever. So if you're doing AI for the industrial world, or for industrial IoT, the internet of things, or anything like that,
just consider making it more of a blind-spot indicator. Really push for that. And then, while the AI is deployed and operating, it can be collecting, on an ongoing basis, the data that you need to make the model better. And then, yes, eventually you might get the model to where it's actually assisting your driving and assisting the turns, or saying, hey, you look really tired, would you like to pull over and get a cup of coffee? It seems like you're really weaving. That kind of stuff, which has a lot less of a negative impact.
So that's where we need to start engaging and that's where we need to start building up our muscle, our AI muscle, if you will, and our AI ethics, understanding where the data is coming from, polishing the data and making sure there's no bias in the data that's going to negatively affect a certain group of people or a certain outcome. Right. And so all the things that we associate with technology and ethics, right.
And of course, don't misuse it. The folks that were part of Facebook before it became Meta and was forced to change its name — everybody seems to have conveniently forgotten about this — there was an algorithm that promoted certain very
reactionary views right around the time where people were already supercharged. Now, that was the choice that they made. It wasn't something that they were forced to do. It was a choice that the company made. Was it an ethical choice? I personally do not think so. So it was really up to the designers and engineers and product managers to say, what are we doing here, really? What will be the outcome if this thing really does what it's supposed to?
what we're designing it to do. Yeah, it's going to make the company more money, but what will be the outcome for society? So, you know, try to think a couple more steps ahead, right? Just common sense. Ask uncomfortable questions. Be that guy or that gal in the room who says, hey, what are we really doing here? What if it really does what we designed it to do?
Yeah. Or what if it gets it wrong? What's going to happen? Can we just spend five minutes on that? Sometimes that's all it takes. It's just asking the right question. Yeah. And I think UX designers are perfect for asking uncomfortable questions anyway, right? They're known for that. And they're the perfect advocates for all the ethical topics. Yeah.
Greg, thank you so much also for diving into all the ethical components. I think super interesting with all the examples that you shared. I feel that we could continue for hours. And really, like, I still have so many questions for you and you have so much knowledge. So I am honestly really sad that this interview is almost over.
But for everyone who would like to stay connected with you or learn more about the things that you're teaching, how can people reach you? How can they find you or where can they find you? Thank you. Thank you for this opportunity. It's been a blast talking with you. I think you're
Yeah, I hope we can do so again — maybe revisit in a year or two and see where we are. So, my website is very easy to remember: it's ux4ai.com, U-X-4-A-I, spelled out. There you'll find all my articles, the blogs, and the events that are coming up.
And I am touring and speaking. In fact, I have a speaking engagement and a workshop coming up at UX Copenhagen. So if you're in Europe, anywhere near Copenhagen, I strongly encourage you to check it out. Definitely. It's a really cool boutique UX conference. I've been trying to speak there for years, so I'm overjoyed that things finally worked out post-COVID and I'm heading over there. I also have an engagement coming up in Virginia, which is about search-based AI — something that has really not been in the news anywhere near as much as ChatGPT.
There's a lot of interesting and very exciting new developments that are happening in search and AI-driven kind of search interfaces, which allow us to handle some of the very interesting new use cases. And then the last engagement is the one in May that's coming up.
And that is UXLX in Lisbon, coming up in May; I'm also teaching the workshop there. So again, if you're in Europe, you're in luck. You have two great opportunities right near you, on the two ends of the European Union, so you can pick the location that works for you. And one more I should mention is a remote online UX salon where we're talking about
UX for AI writing — how AI influences your writing style and what you output as a writer. And eventually book six will come out and will also be available through that site, and probably on Amazon and the normal channels. You can also find me on LinkedIn, where I have
a newsletter that seems to be well received so far — let's hope for more in the future. And I really hope you come up and engage with me, because I very much want to understand: what are people struggling with right now? What can I do, as a member of this UX tribe, to help you
get your sea legs in this new normal? So please reach out to me on LinkedIn or on ux4ai.com, and let's start a conversation about how we can all get better at this new tech and how we can all contribute to the betterment of society together. Perfect. That sounds awesome. And amazing last words, I would say. So Greg, thank you so much for your time. I really appreciate it. And...
See you soon. See you soon. Pleasure. Thank you. Bye. Bye.