This is the Nielsen Norman Group UX Podcast. I'm Therese Fessenden. The term UX has long been used synonymously with user interface, or UI, but these aren't exactly the same thing. And over the past several years, many firms have realized that the design problems they're trying to solve tend to be a bit larger than anything tweaks in visual design could ever fix.
With this expansion in scope comes an expansion in data points, and also a critical need to manage and use these data points to create more meaningful experiences. But how do you manage these massive data sets? Well, one way is artificial intelligence and machine learning. Today, you'll hear my conversation with Dr. Kenya Odor, founder of the UX research and design agency Lean Geeks.
Kenya is a human-centered researcher, strategist, and solution designer. Prior to founding Lean Geeks, she was the Director of User Experience at LexisNexis and was also a User Experience Engineer and Project Manager at IBM. She holds a PhD in Human Factors from NC State University and is also trained in Experimental Psychology and Industrial Engineering. In this episode, we discuss the increase in the scope of UX work, how coexistence with AI and automation will change UX work, and finally, how teams can avoid bias and improve strategic thinking when analyzing large sets of data. With that, here's Kenya.

It seems like there's this shift in the field. Like, there's more intentionality around product and service creation.
We used to focus more on interface elements, or how to design the perfect widget or the perfect button. And now many firms have started thinking about their products as services. Could you speak to any interesting themes you've been seeing in the space of service creation?

Yeah, yeah. So when I reflect back on my roots in human factors and industrial engineering, we were being trained to think from a systems design perspective, or a systems thinking perspective. And the difference, I think you described it really well: the application of what I was learning was more focused on that micro-interaction with an interface, whereas what I was learning in school was around the study of work. And when you think about the study of work, you think about the people in that context, the training, the environment, the climatic factors, the equipment. So it was part of the foundation, at least for me and my training, to think about all of those things.
But when I would go to work, it was always about just that visual interface or the mode of input. And so what I've seen is things kind of come full circle, at least in my own experience, where we're starting to come back to that systems thinking, or systems approach, to design. And with that, service design is a huge component or factor. And so when we think about one's experience with things, sometimes when you have someone come back and tell you they had a really great experience or a really terrible experience, it goes beyond just the interfaces. It speaks to the way that you experience other aspects of that journey or that service. And so I think as an industry, we recognize that you have UX people that have those capabilities. Why not leverage them? And also, when you think about the overarching goals of your organization, you serve your customer from awareness all the way through to support. So you've got to think about all of that as the service.
Yeah, definitely. It does seem like a lot of organizations are growing beyond a kind of myopic or, you know, really short-sighted focus on the visual details. Like, yes, the visual details are important and definitely part of that process, but we're now thinking beyond them. I like what you said about coexisting. It's like this ecosystem of coexisting things that are happening in a user's life or in a customer's life, including when they learn about a brand, all the way through to when they're getting support. So yeah, that is something that's really fascinating about the space for sure. Now, I'm also thinking, too, about this ecosystem: we have, of course, a lot of digital products, but we also have some physical products. That also means we have a ton of data, more data than we've ever had before, and I can imagine big data is only going to grow bigger, especially as we have Web 3.0 on the horizon. So what do you think teams need to be especially aware of or cognizant of when they're starting to utilize that data? Because, of course, maybe it's not humanly possible to deal with every single data point. Maybe we've got to automate certain things, or maybe we've got to turn to machine learning or AI to solve problems. Do you have any advice for teams on what they should be thinking about?
Yeah, yeah. So way back when, when I go back to my dissertation research, a lot of my interests and focus were on the spectrum of automation, from fully manual to fully automated. And a lot of my interests at that point, and currently, are on the fact that when we create these systems, wherever you fall within that spectrum of artificial intelligence or automation, we've got to think about what the human in the loop has to understand. What do they have to give to, and get from, the system so that they can ultimately develop trust? That trust building and that trust relationship is just as important as it is between you and me when we transact.
So I think a big part of it is understanding that it's not just about managing the data, serving up the data, analyzing and presenting insights around that big data. It's also about what I give that person to allow them to trust the data and to understand where it's coming from and what goes into it. One of the big areas of the AI space that I'm passionate about is making sure that we consider that whole "garbage in, garbage out" statement, and that what goes into your training data set has a lot to do with the utility of what comes out. So if you have not factored in what makes up that data, whether it covers the full set or gamut of what needs to be considered, you're going to have limitations baked into the system that you may not be aware of. And those limitations become biases. Then, ultimately, they impact the person consuming them: their decisions are not as robust or informed. And so I think, again, this goes back to trust, but a lot of it means understanding foundationally what people are trying to do with this data, and whether it is a sufficient data set for them to work with.
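As a concrete sketch of what that kind of training-data audit can look like, the snippet below checks the composition of a small, hypothetical data set with pandas. The column names, values, and the 20% threshold are all illustrative, not something discussed in the episode:

```python
import pandas as pd

# Hypothetical training set; columns and values are illustrative placeholders.
train = pd.DataFrame({
    "age_group": ["18-24", "25-34", "25-34", "35-44", "25-34", "18-24"],
    "region":    ["north", "north", "north", "south", "north", "north"],
    "label":     [1, 0, 1, 0, 1, 1],
})

# Audit composition: which groups are represented, and how evenly?
for col in ["age_group", "region", "label"]:
    shares = train[col].value_counts(normalize=True)
    print(f"\n{col} composition:")
    print(shares)
    # Flag thin slices that can bake blind spots into the model
    # ("garbage in, garbage out").
    thin = shares[shares < 0.20]
    if not thin.empty:
        print(f"warning: under-represented values in '{col}': {list(thin.index)}")
```

A check like this doesn't prove the data is sufficient for the task, but it makes the question "does this cover the full gamut of what needs to be considered?" explicit before training begins.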
Second to that, I think it's also going to change the definition of work. Obviously, we're already seeing automation and AI take over some aspects of what we have historically or traditionally called work. But we also have to think about, as individuals, what that means for my role. How do I need to reevaluate the capabilities that I have, that maybe I underestimate or don't currently utilize, but have? And so we've gotten into coaching people around that, because you sometimes forget those inherent strengths or capabilities that you have that cannot, today, be replaced by AI, automation, that kind of thing. And sometimes it takes things as small as asking a friend, you know, when you think of me, what comes to mind? What do I do differently than anyone else that you really love? Sometimes questions like that can trigger you to think differently about your own capability and career, and how you're going to coexist in a world where AI, big data, and automation are doing a lot of the things that we did historically.
Yeah. Yeah. That's a lot to think about as well. I didn't even think about the fact that work itself is going to look different because we're going to automate certain things. Maybe there's certain data analysis that might be very useful in this day and age, but in several years, that data might be analyzed by a machine, and we might not need to extrapolate data in that same way. So yeah, I actually want to dig in a little bit. You mentioned that establishing trust with AI is something that is critical to its success.
So what do you think are the things that prevent people from trusting AI? Because it does seem like there's this uneasy sentiment about it, and I'm curious what you've found in some of your work.

Well, first off, I think it starts with what the data or the AI is being used for, and making sure that you set clear expectations for the consumer of that AI capability: this is what we expect this can be used for, and it doesn't go beyond that. So it's kind of like dating. If we're dating and you let me know up front, this is not going anywhere, this is not going to lead to marriage, you set the expectation that this relationship can only go so far. So set that same expectation from the developer's perspective; set that expectation in how you present the information about the system. I teach HCI to software engineering students in the computer science department.
And a lot of what we look at is how physical systems, like hardware, are analogous to software systems. So one of the examples we use is a bicycle versus a CPU, or a desktop computer, the big box that we used to have under our desks. One is a black box, whereas you can see all the pieces of a bike: how the chain goes around the sprocket and, you know, when you pedal, you see everything move. The ability to see what's happening and how you're producing action is very different than when it's black-boxed, where I'm just typing something at the computer, something happens in that black box, and then something comes back to me. So one of the responsibilities, I think, that's required to help build that trust relationship is to remove some of that black-box-ness from your system. You don't necessarily have to show how, you know, different connections are being made, but demonstrate some amount of the inputs, what's being done with those inputs, and the outputs, the information and actions that you take with a system, so that people can understand, can have a mental model of sorts of how that system works.
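To illustrate one way of reducing that black-box-ness, here's a minimal sketch: a toy scoring model whose response returns not just a score but each input's contribution, so a user can start forming a mental model of what the system did. The feature names and weights are invented for the example:

```python
import numpy as np

# Toy model: feature names and weights are invented for illustration.
FEATURES = ["pages_visited", "days_since_signup", "support_tickets"]
WEIGHTS = np.array([0.8, -0.3, -1.2])
BIAS = 0.5

def predict_with_explanation(x: np.ndarray) -> dict:
    """Return the score plus what was done with each input."""
    contributions = WEIGHTS * x  # per-feature effect on the score
    return {
        "inputs": dict(zip(FEATURES, x.tolist())),
        "contributions": dict(zip(FEATURES, contributions.tolist())),
        "score": float(contributions.sum() + BIAS),
    }

# The caller sees the inputs, what happened to them, and the output together,
# rather than a bare score coming back out of a black box.
print(predict_with_explanation(np.array([12.0, 30.0, 1.0])))
```

Real models are rarely this simple, but the design choice is the same: surface enough of the input-to-output chain that people can build trust in the result.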
Yeah. Yeah. The visibility. I love that you mentioned transparency, or the lack of it. That can often just leave you with a lot of questions, and questions don't exactly inspire trust. So, absolutely, establishing some expectations. At least one of the usability heuristics that pops into my head is visibility of system status: just, what's happening? What is this capable of? And I guess I also want to dig into how work might change. "Are we going to get automated out of a job?" is sort of what pops into my head. Like, what do you think might be hard to automate, or where do we as human beings certainly have a leg up?
Yeah. Well, there are a lot of limitations to AI today, because AI is only as good as its training data set, and it's not well suited today to account for new inputs that were not a part of that training, the way we can take information that we've learned and apply it. So when you think about your ability to perform certain actions, like cooking or riding a bike, some of those things that you learned when you were younger, there are things that you learned that you may not use for decades, but you still have that skill encoded in your long-term memory: how to execute that procedure, essentially. AI is not trained to do that, and today we're not at the place where we've been able to create AI that can actually formulate new ideas and thoughts that are not based on what it's seen as inputs in the past. We still have that leg up of that context of experience. We also have emotional intelligence. Think about experiences at stores like Starbucks, or other companies that really lean in on or double down on customer service. Some of those things require an individual that's going to be able to assess, as a customer walks up, what mood is this person in? How do I respond to them? The way things are coded into AI, it won't necessarily be able to go off course and come up with something new based on what it's encountering. So I think we have the ability to leverage a lot of those skills that we call soft, but that are really important, in ways that AI can't.

Yeah. Yeah.
It's really fascinating thinking about how much that data set really does shape what an AI, A, can do, but B, what it can't do as well. And you mentioned earlier being cautious, or just conscientious, about the data that goes into that training set. What are some of the ways teams can do that? What are the things they should look for to make sure their training data set is as well-rounded as it could be?
The first thing that comes to mind is the composition of your data science, machine learning, and AI team. When I think about data around humans, for example, there might be individuals or characteristics that get missed if you don't have a lot of different thinkers on your team, or people with different contexts. And it's not an intentional kind of thing, but it might just be that because we all think along similar lines or have similar contexts, we think about that data from the perspective of all of us in the room being the same or similar. So opening your teams up to diversity not only helps in terms of the diversity movement overall, but it helps in that thinking when you have brainstorming and collaboration going on, where people are coming from different perspectives and contexts, whether it's around neurodiversity, physical differences, and that kind of thing. You want people that come to the table with those differences, because that helps people foundationally consider the fact that things need to be broad in their consideration. I also think that when you think about how you position your AI, you've got to recognize that in selling the concept, you have to level-set your consumer with the fact that our system works within these guardrails of parameters. Then, on the other end, is the consumer. When you walk up to a system where someone says it can do these wonderful things, like a driverless car, you want to be able to take it out of the context it was trained in, the context that training data set covers, and see whether or not it can perform outside of those guardrails. Because if it can't, that sets for you the expectation of what you can get from that system.
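One way to make those guardrails concrete in code is to record the envelope of the training data and flag new inputs that fall outside it. This is a minimal sketch under that assumption, with made-up numbers; it's not a production out-of-distribution check:

```python
import numpy as np

# Illustrative training data: two features, made-up values.
train_X = np.array([[0.2, 10.0],
                    [0.5, 12.0],
                    [0.9, 15.0]])

# Record the "guardrails": the per-feature range the model was trained on.
lo, hi = train_X.min(axis=0), train_X.max(axis=0)

def within_guardrails(x: np.ndarray) -> bool:
    """True if every feature of x falls inside the training envelope."""
    return bool(np.all((x >= lo) & (x <= hi)))

for candidate in (np.array([0.4, 11.0]), np.array([3.0, 50.0])):
    if within_guardrails(candidate):
        print(candidate, "-> inside the training envelope")
    else:
        print(candidate, "-> outside the guardrails; set expectations accordingly")
```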
Hmm. Yeah. So it does seem like, and I'm just thinking of driverless cars here, it's about having sufficient exposure to not just a broad data set, but the right data set, one that's representative of what's happening in reality. Like, you can't train a car in a parking lot and expect that it'll do well on a highway.

Exactly. Exactly.

Yeah. So, related to that, thinking about data-driven decision-making: what mistakes do you see a lot of teams making? It doesn't have to be limited to AI; it could be other types of data-driven design decisions. And then the high-performing teams, what are they doing differently? What are they getting right?
So in my experience, I've seen teams lean on whatever data is available, or data that aligns with their perspective or opinions. Those are the age-old problems of bias within data analysis or data-driven decisions: only listening to or consuming the stuff that you feel aligns with what you already think. That's the biggest problem we have in a lot of our current challenges in society, only listening to the voice that confirms what you believe. What I also think is problematic is when people don't lean on any data at all, where it's a lot of assumptions. And I always tell teams that we work with: when you start to say "I think" a lot in conversations, it's time to pause and go get some data, and make sure that that data collection, again, is not biased around wanting to find something to confirm what you're thinking or feeling. High-performing teams get it right when they have experts involved in that data collection and analysis. I've seen people in roles where they're responsible for defining how to gather the data, and also gathering and analyzing it, where they probably didn't have expertise in those areas. And so there were some problematic aspects of the data, or biases baked into it, that the rest of the team wasn't aware of. So having experts, in terms of your partnerships within your organization, matters. But it's also not only having the data; it's knowing when to stop and pause and get the data. A lot of times teams will keep going down a certain path, and they don't stop to say, hey, maybe we need to go gather some data and validate this concept or this assumption that we have. They keep going, and then they find out later on, oh, we've got to go back and rework, or throw away, or put something in the backlog, that kind of thing. So: the right timing, the right people, and making sure that you have the right data objective.
Yeah, there's a saying, well, not a saying, but one of my colleagues, Kara Pernice, regularly mentions her gripe with the word validate, because it often implies that, oh, we already know it's correct; we've just got to make sure it's correct. So that confirmation bias doesn't always get eliminated with the validation step. But if we treat it as validate slash invalidate, then we can make sure that we're actively disconfirming things that we expect to be true, so that we can get to, you know, objective truth. Yeah. I think so. I always lean on her whenever I'm wondering if I need to double-check my research plan. Like, does this make sense? Does this have the appropriate means by which I can check my assumptions? It can be hard to get right even if you are cognizant of it, but it does help. Mm-hmm.
So I guess my last question is, you know, if people are building something that doesn't exist yet, and this is something I'm often asked: I don't have a design, so how can I research without one? Do you have advice for that?

Yeah, I actually had a meeting yesterday with a prospect, a team of folks that have an idea, and they've started the process of creating their own wireframes.
But the interesting thing is, one of the questions that came up in the room was, well, how do we validate this if we don't actually have a working product or, you know, even a concept? And in my experience, we've used framing things from an opportunity standpoint. This goes back to some of those design thinking, opportunity statement concepts, where you frame it around who you are in that context, what you're trying to accomplish, and what we're going to do in terms of solving that or helping you do that. Framing things in that way, and providing something in the form of a storyboard, kind of sets the stage for the person: what's happening, why it's happening, what may have occurred, and why you potentially need this solution or can utilize it. So providing people with that context, and giving them an opportunity to share ideas on what they expect and what would work in that scenario versus not, kind of progresses your idea: are we going down the right path, or do we need to shift our understanding of the idea? And so I think it starts with first having those conversations with people outside of your team.
And then just going through from the abstract to the concrete. So taking that and turning it into a concept or a prototype: what does that look like? Put that in front of folks and see, you know, hey, we talked about this scenario, and you said that there was an opportunity for us to solve around this. What does this look like in that context? Does this feel like something that would make sense? So, getting some of that feedback. And again, this goes back to knowing when to get the feedback; that's just as important as the feedback itself. As you use those conversations as a way to shape and mold that idea, it's almost like a co-design effort that allows you to get to a place where you're not just designing in a vacuum and then doing the big reveal, where people are like, wah, wah, I don't care, instead of, oh, wow, what is this? So it kind of helps you validate, or invalidate, along the way what makes sense.
Yeah. Yeah. So it's two parts, I'm imagining. The first part being, you really need to know that problem space really, really well. And that means talking to your prospective customers, even if there isn't an existing product, right? It could be: how do they solve that problem right now? Maybe they don't. What does that mean for them? What are the things that impact them as a result of not solving that problem? And then the second part is, you start to co-create this new idea based on that previous information, but also maybe based on some additional check-ins with that group of users. It doesn't necessarily need to be the same exact group of people, but it could be someone representative of that group, and you can continually get feedback. So I do think it's about consistently iterating: not waiting for that grand reveal before you get that final feedback, but quickly making small changes, basing them off of new data from that group of users.

Yeah. And I have also found that the real key to that is making sure that you understand the demographic structure, variety, or spread of the people that you're targeting. Don't just find the people that are easiest to access. And I think this goes for any research. We at Lean Geeks try really hard to do this, even though some groups are harder to gain access to than others. But you'll learn some really interesting nuggets that you didn't think of, because everyone's cultural context is different, and cultural context means a lot of different things. And I think that's where, with a lot of companies, especially today with social media, we're seeing a lot of fails in terms of, you know, being sensitive to and also understanding cultural context.
Yeah, yeah, absolutely. It takes a lot of research and a lot of listening. Now, I guess what also comes to mind is, when thinking about designs and the future, and being mindful of who you're catering to, we might not always be lucky enough to have no existing product and be able to start from scratch, right? I'm also thinking of the folks who have really gnarly legacy systems, where either they're building something to coexist with them or maybe they're starting anew. What advice do you have for those who are less lucky?

Well, I mean, it's funny, because I like to solve problems.
I like really big, complex problems, with different types of users, and unpacking: what are their motivations? What are they doing? How can we help make their ecosystem, or their existence in that ecosystem, more effective? And so when I think about it that way, the challenge of working on something legacy can be just as interesting as creating something new, depending on how you look at it. And you just described the scenario: how do you build something that coexists with that legacy system? We have lots of systems and products and tools out there that are legacy and will be that way for a long time. So it's figuring out some novel ways to serve up their value, or to keep them going, without throwing the baby out with the bathwater. You can't just scrap it and start over. You've got to figure out creative ways to serve up that value differently and to leverage what's out there now that we didn't have 15, 20 years ago. I think those types of challenges are so interesting, and
sometimes not as rewarding for people from a design perspective. And when I say design, I mean the visual design perspective. It may not be as interesting in that regard, but to me there's so much more of interest from a systems perspective, a systems design perspective. I had a coworker years ago who was a product manager on a legacy product, and he was comparing himself to another colleague who got to work on the new, sexy stuff. And so he's like, you know, oh, how do I sell myself at the next company, or when looking for my next role? How do I sell myself when I'm working on these old, clunky systems? And I'm like, there's a huge opportunity there to brand yourself as someone who's owned and managed legacy system evolution, you know, who knows how to make end-of-life decisions, but also where to continue to leverage capability. That is a whole skill. That is a whole domain and discipline that's necessary. And so I don't know if that made him feel any better, but I was like, you know, there's a lot to be said for people who have that capability to think about some of those technical limitations, and that debt, in the context of creating something new.
Yeah. Yeah. So, in a way, maybe not thinking about it as an unlucky problem, but a lucky problem. I guess the other way I've heard it described is: it can only get better, because it can't get much worse, right? There can be some excitement in solving some of these super complex problems, especially if we can think of it as a problem-solving opportunity and a way to really think a bit bigger than some of the quote-unquote easier problems. So yeah, and we certainly won't be automating our way out of that job anytime soon.

Exactly, exactly.

That was Dr. Kenya Odor.
You can learn more about her work at leangeeks.net or by following her social media channels, which are linked in the show notes. You'll also find in those show notes some links to NN/g articles and videos, and there are plenty more where those came from at our website.
By the way, there are some upcoming opportunities to learn, either virtually or in person, and details on those events can also be found at our website. That's www.nngroup.com. That's N-N-G-R-O-U-P dot com. Finally, if you enjoy the content you hear on this show, please leave a rating and hit subscribe. This show is hosted and produced by me, Therese Fessenden. All sound editing and post-production is by Jonas Zellner. Songs are by Tiny Music and J. Carr.
That's it for today's show. Thanks for listening. Until next time, remember, keep it simple.