This is the Nielsen Norman Group UX Podcast. I'm Therese Fessenden. With all the tech layoffs over the past couple of months, many UX teams are shrinking in size. This downsizing of UX labor means that UX research is harder to do, particularly in organizations where UX maturity is relatively low. But even in organizations where UX has historically been really valued, time and budget constraints are real.
And they do mean that UX research needs to be prioritized in really intentional ways. One way to do that is to use discount inspection methods to identify high priority design issues that need further research. To explore this topic, I chatted with Evan Sunwall. Evan is a UX specialist at NNG with over 15 years of experience studying UX designs, team dynamics, and career progression trends.
In this episode, he shares his journey to UX, from team of one to a team of many, and offers tips on how to present findings in a way that resonates with stakeholders. Without further delay, here's Evan. So I studied computer science. It was the trendy thing to do back just before the dot-com bust, when there was a big deflation in the tech market. And so I went through that. I was studying it.
I didn't love it. I didn't really enjoy it that much, but I was already kind of 75% of the way through. And so I decided I'm going to get this degree and then I'm going to kind of think a little more about what I want to do. And at the tail end of me getting that degree, I discovered human computer interaction. And this element of kind of the human experience of using technology was something that I was always kind of seeking and looking for, but I didn't have the words and I didn't understand
if that was even a field. And then I discovered it and I was like, oh, this, this is what I wanted to kind of get into more. And so I read Don't Make Me Think by Steve Krug. I found useit.com. You can still go to that domain. It'll take you to NNG's website. And I started reading that. Yeah. Yeah. And I was like, oh, this is really interesting.
And so from there, I applied to graduate school. I was like, well, let me go study this. And so I got into graduate school, but, you know what, a job came through: a software development job. And I thought about it a bit and I said, let's earn some money. Maybe I should just go see a little bit and do this software development thing for a while. So I did that for two years.
And had this interesting experience. The company was acquired. I got laid off. It was hard, but it was a growth opportunity. And I said, okay, well, maybe let's go. I've done this now. I don't want to do software development anymore. Let's do this human-computer interaction thing, this usability thing, this UX thing. So I reapplied to the same school, got accepted again.
And then about two to three weeks before I had to make a firm commitment about what I was going to do, I applied to a UX job on a lark. I was just kind of looking at monster.com. That's right, I'm dating myself, monster.com. And I said, hey, there's a job here. Let's just apply to it. Let's just see. And it turns out I got it. I got the job. I declined school again, and I doubt they'd take me a third time.
And what happened was I worked for that company for 14 years, and I was a UX team of one. It was a pretty low-maturity environment. I was basically the first UX designer they had ever hired. And in that journey, I had this mind-blowing experience of watching a product go from a sketch made in a conference room, where we're working together and there's a sketch,
to transform, to be built, to become the company's best-selling product, to be working and evolving it and growing it over many years, to becoming a manager, to managing the people who are then taking responsibility of it and redesigning it and actually seeing my work go from sketch to trash can in a manner of speaking. Yeah.
And I saw it all. Like I was there to witness the entire thing. And it was just this remarkable firsthand experience of a product's life cycle. And then I applied to a job at NNG. Surprised I got it, but I got it. And here I am. What an adventure. What a journey. And I can only imagine the challenge that it would be to figure out, am I ready to
or do I want to work, versus do I want to deep dive into this world of education and build up that confidence that way? But it seems like you got kind of baptized by fire, if you will, and went straight into the workforce. How was that? It was toasty. It was difficult. What I didn't appreciate at the time is the great thing about junior people early in their career: they don't know how hard stuff is.
So they just endeavor. They've never been burned before by a failed project or a shutdown project. They just kind of lean into it and go. And so I had that going for me. I didn't realize what was maybe difficult or what to expect. But that realization came around a lot later. It's really hard not to have support structures, not to have other experts to lean on, like researchers or visual designers.
Not having a mentor, not having a manager who's well versed in UX practices. It's kind of a lonely existence, being a team of one. Sometimes you're just screaming into the void, and it's formidable. It is definitely character building. Yeah.
Yeah. So it's awesome and also very admirable to see your progression over the course of those 14 years, being this novice in the field to now being an expert mentoring others. And that mentorship, I know, is something that you're very passionate about. Certainly, you've published a lot of articles about it. And
Thinking about mentoring new designers or new researchers or just people who are new to the UX field in general, I often think of the fact that once you've learned or you've identified yourself as, yes, I am a UX professional, it's like you have this eagle eye for spotting usability issues or like, oh, this could be designed better. So in thinking about these newer designers, if you were to take yourself back in time, you know,
So there are probably some things that you want to do to evaluate a design to make sure it's usable. And I know there are a couple of methods that allow us to do this. We often call them inspection methods or discount inspection methods. And I know a lot of folks, especially a team of one, maybe with limited budget, trying to do this for the first time, will often turn to these methods because it's, ah, yes, I can do an analysis, like a heuristic evaluation, and
figure out, is this product going to be worth the time? Or, you know, are there things we can adjust to make it better? Could you share a little bit about what these inspection methods even are and, you know, whether or not they're worth doing or, you know, should we default to research instead as a way to answer those questions?
Absolutely. So many years ago, in the usability kind of UX field back then, there was a really high bar, right? We had to do a lot of quantitative analysis, empirical studies, rigorous stuff.
made really lengthy reports. And so there was a craft to it. It was expensive, right? And then in the 1990s, Jakob Nielsen, Robert Mack, Rolf Molich, and some others started to popularize this idea of discount usability, of using methods that didn't require the recruiting and the effort and cost involved in getting a lot of these people and creating these kinds of involved studies, to say, look,
You can learn quite a bit by using some of these methods. Cognitive walkthrough falls into this category, but expert and heuristic evaluations fall into it as well, where you do an inspection based on heuristics, or on your cumulative experience of watching people use experiences. And by doing that, you can actually detect and determine at least some areas that need improvement, for a lot less cost and a lot less time.
And that is a great way of actually getting a clue, a little bit of signal like, hey, maybe over here, or maybe this is a part that we could start to improve, investigate using some of the other methods that are at our disposal. And that's the one thing we talk a lot about is triangulation, right?
There's no one technique to rule them all. You kind of have to use a combination to really kind of understand what's happening with an experience or challenges therein. This is just another one, one of the other tools in the kit to use really easily and cheaply to give you that kind of clue of, oh, okay, so now that we know this,
What other technique or method could we use to get a fuller picture of what's happening? Yeah, and I appreciate that term triangulation. Another metaphor I've used in the past in some of our courses is thinking like a detective:
you never really go with one source and say, okay, that's what happened, that's the truth. You usually go to multiple sources and you pull together the facts to see how much of it aligns and how you can better form a sense of truth about what's happening. And
I do think at times heuristic evaluation is seen as a cure-all where it's, "Well, we can just evaluate it and ta-da, we've figured out all the usability issues." Certainly, it seems like they could be problematic, but I guess I'd love to hear, what do you think the pros and cons are of these types of methods? Yeah. With heuristic evaluation, you are using your subjective experience to identify some problematic areas
And you can do it fairly quickly. It's good for utilizing these heuristics to determine, hey, this area seemingly goes against some best practices, this could potentially cause some of these issues, this is worthy of some additional investigation. You know, I actually want to take a moment here. I want to tease these two ideas apart, because a lot of the language around these techniques is
kind of messy. So people say heuristic evaluation when they're actually doing an expert review or an expert evaluation, so I just want to tease these apart. With heuristic evaluation, you should have a set of heuristics, right? Principles where, if you follow them, you're generally going to have a better user experience.
Having some experience does help here, but heuristic evaluation is a better shortcut for people with slightly less work history, because they can lean on those heuristics to identify issues. The expert evaluation is a little bit different, right? You're not adhering to those heuristics.
You're kind of using your contact hours of actually going through, you know, many testing sessions, many studies, watching people use an experience, and you're relying on that. And probably also some domain knowledge. If you're a particular domain expert, in something like financial services or healthcare, and you've spent a lot of time in there, that's also a multiplier, a helpful advantage when you're doing these. And you're using that
in a very similar way, which is why it's kind of confusing, to say, I think this is a problem area, this is something that needs some investigation, that could have this problem. So to run an expert evaluation really effectively, you need experience, and hopefully some domain experience. But whichever one you're doing, there's some commonality to both of them. So if you're going to use one of these techniques, if you're just trying to get
a clue as to maybe where to start some further investigation, you have to determine with your team, with your client, whatever the situation, the high-frequency, high-importance tasks, three to five of them, right? Determine that. You need hopefully some information about the users. Maybe there are some personas; ask around, right? Maybe you have some information to help you understand what they're doing.
You need a live working system. You're not doing these techniques with prototypes; you need to have the full breadth of interactivity of a working system. If there are analytics, great. That's also helpful. It gives you more data, more clues as to what people are doing or not doing, that behavioral side of things.
And what often is greatly misunderstood, multiple evaluators. It is better, not to say you could never use these techniques without multiple evaluators, but it is very helpful to have about three evaluators doing this kind of independently and getting together, determining the heuristics, determining the tasks, right? Kind of level setting, then doing the analysis and bringing it back and summarizing it. And so
that is something that is often forgotten about, or not talked a lot about: the advantages of having multiple people do these techniques. Yeah. Yeah. So two major takeaways I'm hearing from this. First, these techniques are different in that one is based in a set of heuristics, if you will, a rubric. Often that rubric is the 10 usability heuristics that Jakob Nielsen coined. And so having that rubric, having that
that set of criteria helps to guide newer researchers. So certainly that makes it a more viable choice for someone with less experience in the field. Whereas the expert review, as you said, depends on having sat through multiple usability tests. Even if it's not that exact one, you've seen enough usability issues either with this system or other ones to have
a sense of what is considered a best practice and also, you know, what might transcend usability, things like cognitive science or behavioral economics. But yeah, that last point too, the multiple evaluators, certainly seems like a really important criterion in order to actually get more objective information. So are the multiple evaluators more to divide up the work, or more to have multiple perspectives of the same thing,
the same insights, if you will. It's definitely the multiple perspectives because one evaluator is only going to find so much. And it's going to be a little dependent on how they know the space, years of experience, practice, you know, using the technique. So you're really not dividing up the work per se, because you're doing it in parallel. So you're really getting multiple perspectives to get a more well-rounded approach to the final results.
And then sharing that out and having a more accurate and deeper perspective of what the experience may be like. That makes sense. Yeah, thinking of multiple perspectives, I'm sure, helps you get multiple angles on the same problem. Otherwise, if one person were to try to tackle maybe this particular workflow, if they were the only one to have eyes on that, then it might be a bit challenging to really fully analyze the usability challenges that might be there.
Yeah. So on the topic of these insights that you now have after doing one of these evaluations or one of these inspection methods, how do you present these to people? Do you just present the rubric, or do you have other advice for how to share this knowledge with others? Okay, let's break this down here. So it's commonly a presentation. So you probably have a Keynote, you have a PowerPoint,
And always send it out before the meeting. Give people a few days to process the content. They're probably not going to read it. But you want to give them the opportunity. You want to say, I'm being transparent. Here you go. Process it. Maybe they'll have a couple of better questions than if you hadn't done that.
And so I generally found as a UX professional, what I advise other people, we don't like surprises unless we're doing testing where there's always surprises. But anytime outside that setting, surprises are generally bad and can lead to some really unproductive meetings. So send it out a few days beforehand. So let's get into the actual content itself. So you should have an executive summary.
And here's a rule of thumb that I've learned: about 10% of your slides should be the executive summary. So if you have 30 slides, you know, 25-ish findings or something like that, that's three summary slides. All right. Talk about strengths. This is sometimes overlooked in conversation. If you're doing a heuristic evaluation, what are the things that really adhere to those heuristics, that really promote them? Did you notice anything like that when you were doing the analysis?
Highlight a few of those. Do you notice things that are recurring problems that are kind of pervasive throughout the user experience? Summarize and highlight that stuff too. I actually, I personally like to save that for the end after I've actually documented and put out all the findings and then say, okay, aggregate this. How can I summarize this stuff? And then save the writing of the executive summary for last in this process.
And then save a lot of time for Q&A. So if you've got 60 minutes to do this readout for the team or some clients, expect that you're going to talk for about 10 minutes, and then it's going to be a lot of Q&A afterwards. It's going to go a lot of different places. So don't expect you're going to monologue for 60 minutes when you do one of these. That's really helpful advice. And I know certainly when I was...
a little bit, I don't want to say younger, but earlier in my career, I would get really ambitious, like, oh, I'm going to document all these findings. I'm going to share all these findings. And then I would maybe get through half of them and I would realize, oh no, I only have 15 minutes left and we still have a lot to go. So certainly planning for
presenting less but having time for Q&A, certainly more than just five minutes at the end, because you're going to need a lot more than that. Yeah. So just prepare yourself for that. That may happen. So let's think about the findings themselves. We're going to use our mind palace to picture a slide here, because we're talking about it. So in the findings section of your heuristic evaluation,
organize them by task. So you had those high-frequency, important tasks, right? And on each individual slide, for each finding, what we want is a title written in a complete sentence. Okay? Force yourself: how can I summarize what's happening here, in a complete sentence? I promise you, it's going to be a little hard if you don't have a lot of practice with that, but it forces you to communicate really effectively. That's going to really help.
We want a big image, like 70 percent of the slide should be an image of the experience. The nice thing about that is you can provide a lot of visual context,
It also takes away most of the slide space, so you can't jam in a bunch of bullet points and write a bunch of stuff that people are not necessarily going to pay as much attention to. And we have a lot of space to annotate, where we can show: see this here? This could be an issue. Here's my rationale. Here's my analysis. So that's going to leave only 30% of the slide for us to write stuff in. Yeah. At the top of the slide, we want severity.
We want to have a mechanism that says this is probably a major issue in the experience, something that could lead to people stopping their work, not completing the workflow. That could range from low to medium to high.
I recommend two other ones that I'll talk about in just a second. But what you want to do, especially if you have other evaluators, which is really great if you can do that, is use a number system: give numbers to the ratings and find a way to average those numbers. That way you'll actually have a synthesis of where the evaluators are in mutual agreement: this is a high-severity, task-blocker kind of finding in this experience. And have that be really bright and saturated, like, hey,
Big problem here. It should really stand out on the slide. If you squint, you should still see that status for the finding. Yeah.
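To make that averaging idea concrete, here is a minimal sketch, not something from the conversation itself, of how three evaluators' independent severity scores per finding could be combined and sorted for a readout. The finding names and the 1-to-3 scale are hypothetical placeholders.

```python
# Hypothetical example: combine independent severity ratings from three evaluators.
# Assumed scale: 1 = low, 2 = medium, 3 = high (task blocker).
ratings = {
    "Checkout: error message offers no recovery guidance": [3, 3, 2],
    "Search: filters reset after navigating back":         [2, 3, 3],
    "Signup: password rules only shown after a failure":   [1, 2, 2],
}

def severity_label(avg: float) -> str:
    """Map an averaged score back to the label shown on the slide."""
    if avg >= 2.5:
        return "High"
    if avg >= 1.5:
        return "Medium"
    return "Low"

# Sort so the most severe findings lead the report, then print one summary line each.
for finding, scores in sorted(ratings.items(),
                              key=lambda item: -sum(item[1]) / len(item[1])):
    avg = sum(scores) / len(scores)
    print(f"{severity_label(avg):6} (avg {avg:.1f})  {finding}")
```

Averaging is only one simple way to synthesize; teams sometimes flag large disagreements between evaluators for discussion rather than averaging them away, since a split like [1, 3, 3] usually merits a conversation.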
I think that serves two purposes. One, it forces you, as you're doing your analysis, to evaluate and also to prioritize certain issues as you're writing your report out later. But the second is it also helps if you have a tendency to overlook the good, because, as I know from talking to a lot of researchers in the space, it's very easy
to focus on the negative. Negativity bias, you know, is real even in research. So forcing yourself to really consider, is there something particularly effective here that I'm just overlooking because the design is simply good and it's kind of blending into the background? Being able to be objective is so important as well. So that reminded me of your point earlier about trying to notice the good as well.
Because if you come into a meeting, talking about no surprises, and all of a sudden you're just saying, everything is terrible, it's on fire, we can't use any of it, and you offer no good advice, then you kind of become the grim reaper of your organization. You're a UX sourpuss. I probably haven't been called that to my face, to my recollection, but probably behind my back, maybe once or twice. So you don't want to be the UX sourpuss.
This is especially important for embedded teams. If you're a consultant,
it's a little different, because you're just giving them the insights and, you know, hoping they use them. You have a good rapport, but you're not going to be there when the hard work really digs in and it's time to start fixing things. So you want to have good relationships, good rapport with those engineers, with those product owners, whoever they may be. So in this kind of environment, keep some mindfulness around what's good and document a few of those things. You want to sprinkle that throughout the report.
You want to organize by severity, of course, but sprinkle some positivity in there as well. And you may find that helps people accept some of the findings with a little more grace, because you're also accentuating and calling out parts that are in service of some of these usability heuristics. Yeah, absolutely. So there's the label. And what about things like
bugs? Do you call those low severity, high severity? Because sometimes they're bad, but maybe they weren't intentional. Do you have thoughts on how to address those? Yeah, when it comes to bugs, I definitely recommend having a separate category when it's something that is clearly broken, clearly a defect, something no person would rationally expect to happen, or some sort of weird browser error is being triggered.
I definitely recommend having that listed as a bug and calling a spade a spade. Because, yes, it could cause some severe issues in the user experience. It could be very impactful. It could be very detrimental. But
If you want to have that kind of relationship with your engineering colleagues, say, "Hey, look, we noticed there are some bugs with this experience. Let's get them into the tracking system. Let's get them into the defect tracking system." And it's just a nice way to communicate, "I know you didn't intend it to work this way, and I'm not going to get caught up and hung up on rating that a severe issue." Yeah, yeah, for sure.
So a couple of thoughts and questions I have. Now you have this setup where 70% of the slide you're presenting is an image, and you have some sort of finding. So how would you describe that? Like, how would you round out that slide?
So we have three major sections in this small space. We don't have a lot of space to work with here, so we have to really judiciously carve up some sentences across these different areas. The first one is just describing what happened. As you're using the experience, clicking through it, what did you experience? What is driving you to say this is in violation of this principle, or at odds with your past work history?
And so you want to have two or three sentences just describing what's going on here. So you're giving context. Then you want to give two or three sentences using some design analysis. Why might this be happening? What's contributing to this finding? And then you want to have two or three sentences. This is often overlooked. Give a recommendation. Give a suggestion of how might this be resolved or the tension be eased or better alignment with the principle.
And this is a key thing to understand: they're not going to take it at face value. They're not going to just say, okay, sure, we will do that, because there have to be discussions on feasibility, further discussion with other people on the team that maybe hasn't happened yet. The goal is just to incite the conversation and to give people awareness that change is possible, that we can do something to alleviate the harm here. But
don't mistake it: if they don't take all the recommendations at face value, that doesn't mean you have somehow failed. You do want to have them there anyway to promote healthy conversation. Yeah. It's a lot like that movie Inception. You're planting the seed. Even if it's not something that gets decided right away, it might eventually lead to conversations, and even if it doesn't happen right away, it gets put in a list of tasks to actually accomplish. It becomes something that gets implemented.
UX is all about soft power, right? We're not at the helm. We don't have the controls in front of us. It's all about influence and soft power. I learned that a little too late in my career, to be fair.
Yeah, no, same here. And it's a lot of delicate work with human beings and ensuring that they feel seen and heard, but also that you're giving them something to think about that empowers them to make positive changes as well. So yeah, I guess on the topic of, you know, anything else you should
present to people other than this deck? Because certainly if memory serves me well, as far as what my experiences were like with presentations like these, there were always so many more questions than I could address meaningfully in a one hour or hour and a half meeting. Are there other things that you might want to include, even if it's not in the presentation, maybe in other documents?
Well, you could also include a potential prototype, or, I'd say, a sketch. I wouldn't make it too complicated. You could use a sketch to showcase or illustrate some ideas. You could...
put an agenda item at the end, reserving time to talk about whether you need a follow-up, whether you need to actually discuss things further. Something that's often overlooked when we talk about scheduling these is the importance of having that agenda and keeping time. Now, you may lose track of that time in the 40 minutes where people are asking lots of different questions and things are going different ways, but you also want to be documenting some action items that are coming out of this presentation.
So that you can follow up. And this is an important thing: you want to connect the dots, especially for in-house teams. You don't want to give one of these and say, well, good job, everyone, pat yourselves on the back, that was really insightful, we have some new awareness of problems or issues with our experience, and maybe some good stuff too. You want to think: what is the next dot?
And so you want to talk about who could pick up some items. Are we going to get these into the Jira system or bug tracking system, whatever you use to track that stuff? Do we have a follow up session? Who's going to be there? When can we have it?
Always have people's names, what we're going to do, a little bit of a deadline or some timetable associated with that as well, because you want to keep the dots going. You want to make the full picture of actually making this experience better. And sometimes we just deliver it and we consider it done. Yeah. Yeah.
That deadline is often forgotten about. I don't want to necessarily say overlooked, but by not having a deadline, it becomes this thing that just needs to happen at some point, and then sometimes that some point never comes. So 100% agree: having a set of next steps, and,
even if there are questions, being able to say, I will get those answers for you, or someone else on the team has the ability to get those answers, so that actual recommendations can get implemented.
So I guess to wrap up this concept, it's a very powerful tool, it seems like, to be able to do a heuristic evaluation or an expert review, to have multiple evaluators, and to be able to present something that enables the team or empowers the team to make the design better, maybe even before putting it in front of a person to test, a user to test.
Could you give people advice on things to avoid? I know you mentioned a couple of pitfalls earlier, but what are some mistakes teams should be cognizant of to make sure they're evaluating a design well? Number one: forgetting the audience. And this is key. I'm not talking about the users. I'm not talking about the people who may be using this experience. That's important, and I think we reinforce that a lot.
Why did they ask for this in the first place? Your immediate collaborators, your stakeholders: what's driving them? Where is the pain coming from? A lot of times we don't actually think really critically about what their needs are. What are they worried about? How are they measured in your organization? So get really attuned to that pain before you even start,
before you embark on this: is it that we're really concerned about this area, or there's some competitor pressure, or we've heard a lot of feedback that this part of our support workflow is problematic or not helpful? Whatever the case may be, get a good grounding as to why this is being sanctioned in the first place. And then use that to frame your communication, frame your deliverables, around how it is affecting that, and you're going to be a lot more persuasive.
It's funny, the thought of empathy. We're very good, well, generally speaking, as UX professionals, we're very good at empathizing with users, because that's our primary job. But it's often not something we think about right away as far as empathizing with colleagues and what they care
about, or what is really important to the completion of their job, be it someone on the development team or someone in sales or someone in marketing who will ultimately be affected by certain design decisions. So that point about remembering your audience is one that resonates not just for these types of heuristic evaluations and expert review presentations, but for
presenting any information in general that you gather from these types of usability improvement efforts. You know, our users don't necessarily wrong us in terms of our professional practice, right? So it's natural to have a more gracious and more positive view of our users. It's those difficult conversations we have with our business types or our engineering types where we're let down, where we're disappointed.
There are misunderstandings, there's miscommunication, and that can fester and really pull away from our ability to be influential and to have techniques like these heuristic evaluations actually resonate, be more successful, and lead to more change and improvement in our products' user experiences, because maybe the relationships are weak and we haven't invested in nurturing and improving them.
Yeah. Well, this was really a fun conversation for me, because I get to think back on my days as a consultant, but I also know it'll help many others, whether you're in consulting or not. This is a constant, ongoing responsibility for us: to be able to speak to what we learn and to vouch for why certain design decisions are made. So I really appreciate you talking about this concept. And I guess,
if you were to point people to certain resources to learn more about these types of inspection methods, are there places you would point them to? Well, I know a website they could go to, which has a lot of information about discount usability and heuristic evaluations. NNG's website has a lot of content, with
a lot of detail actually on the heuristics themselves and a lot of great examples of their impact in real life. And so I definitely recommend checking out some articles there that have a lot of great detail on the different heuristics and what makes good usable experiences. And if people want to follow you and your work, is there a place they could look social media wise or otherwise?
You can look me up on LinkedIn. That's one of the few social media sites that I use with some regularity. So check me out there. Connect with me. Always happy to. I connect with people who take our courses all the time. So look me up there. All right. Well, Evan, it has been an absolute pleasure. So thank you for your time and hope you have a great rest of your day. You too, Therese. Thanks.
That was Evan Sunwall. If you want to learn more about him and his work, or just want to learn about UX in general, a great place to go is our website, where you can find thousands of articles, videos, and reports about UX design, research, strategy, and even UX careers. We also have some upcoming training opportunities, courses that you can take in full-day and half-day formats. Those will be in February, March, and April.
All that information can be found on our website, www.nngroup.com. That's N-N-G-R-O-U-P.com. On that note, if you want to stay up to date on our latest research or publications that we put out, we do have a weekly email newsletter, which features all of those publications and upcoming online courses. And if you enjoy this show in particular, please follow or subscribe on your podcast platform of choice.
This show is hosted and produced by me, Therese Fessenden. All editing and post-production is by Jonas Zellner. Music is by Tiny Music and Dresden the Flamingo. That's it for today's show. Until next time, remember, keep it simple.