
33. Tracking UX Progress with Metrics (feat. Dr. John Pagonis, UXMC, Qualitative and Quantitative Researcher)

2023/12/4

NN/g UX Podcast

People
John Pagonis
Therese Fessenden
Topics
John Pagonis: Integrating UX research into an enterprise is about transforming the organization so that it uses UX research evidence to improve products and the user experience. UX metrics help organizations make decisions and improve: they raise product ROI, improve the team and the product, and ultimately improve users' lives. Introducing UX measurement means answering a few key questions: How good or bad is our current state? Have we done our job well enough? How do we compare against other systems? Where do we find evidence to decide what to improve? Many teams lack a closed feedback loop, cannot prove the value of their work, and therefore struggle to win budget. UX research should help product management make informed decisions, answering questions about direction and budget allocation. Fear of metrics and measurement stems from not knowing, and is overcome through education. Combine qualitative and quantitative research, start with small experiments, show the value of the data, and explain the data through narrative; turning quantitative data into a story helps stakeholders understand and get involved. Quantitative data provides signals and benchmarks that prompt further questions, which are typically answered with qualitative research. Negative commentary from some UX thought leaders about UX measurement and quant UX may be harming the community. Qualitative and quantitative research are both indispensable and must be combined to understand the user experience. Organizations that use only quantitative research tend to have low maturity, because they may manipulate the data to reach the results they want. Qualitative skill is the foundation of good quantitative UX research: only by understanding user behavior can you measure effectively. In qualitative research, knowing how to ask questions is critical, because it directly affects data quality. The key to quantitative analysis is storytelling, turning data into an intelligible narrative, and clustering is an important skill shared by qualitative and quantitative work. Measuring usefulness is fundamental to UX research because it covers both usability and utility; to measure effectively, you need task analysis and an understanding of users' goals and needs, which requires qualitative knowledge. Collaboration between qualitative and quantitative researchers, and between generalists and specialists, is critical to team maturity; building both capabilities raises the maturity of teams and organizations, and even a low-maturity organization can introduce both at once to accelerate its maturity. The benefits of combining qual and quant are not linear but exponential. A closed feedback loop not only improves the team's own work but also amplifies its influence and wins more resources. Start with usefulness, improving usability and utility, to improve the overall experience; over-optimizing one part of a system can bring down the whole system. Build trust; transparent communication and patient guidance help drive organizational change. Communicate the research process and results patiently and transparently, and pick the right moments to challenge.

Therese Fessenden: Teams commonly make two mistakes when measuring the user experience: not regularly assessing the current state, and not following up afterward to determine where to improve. Qualitative and quantitative data complement each other: qualitative data explains the signals in quantitative data, and vice versa. Mature organizations use both, while low-maturity organizations may use only quantitative research or avoid research entirely. Relying solely on quantitative data makes the data easy to manipulate, whereas qualitative evidence, such as videos of people using the product, is much harder to manipulate. Measurement should cover outcome metrics, perception metrics, and descriptive metrics, avoiding over-focus on any single metric: a lone metric such as time on page may not reflect reality and needs to be read alongside other metrics, and the right metric to optimize depends on the situation, so as not to over-optimize one number at the expense of the whole. Frank conversations with business decision-makers about goals and constraints help clarify and improve design work, because a design decision is ultimately a business decision and must align with business goals.


Chapters
Dr. John Pagonis discusses the importance of measuring UX to make informed design decisions, the role of quantitative data, and how organizations can improve by asking fundamental questions about their UX performance.

Transcript


This is the Nielsen Norman Group UX Podcast. I'm Therese Fessenden. Over the past few episodes, we've been interviewing members of our UX Master Certified community to learn how they're applying key UX principles in their work. Today, we're featuring an interview we had with Dr. John Pagonis. John is a Principal Qualitative and Quantitative Researcher at Zanshin Labs in London.

He was one of the first to achieve a UX master certification with us, but has also given a number of talks on the importance of measuring and benchmarking UX work. In this episode, we talk about the role of quantitative data in making wise design decisions, the qual versus quant debate, and the impact that's having on UX work, both good and bad. And finally, how you can think more holistically about metrics to ensure you're actually moving in the right direction with design improvements.

In this episode, there are a few UX metrics mentioned, but not fully explained in the interest of brevity. But if you want to learn more about metrics that John mentions, you can find links to articles that fully explain all of these in the show notes. With that, here's Dr. John Pagonis. So John, welcome to our podcast.

Excited to have you here, excited to get to know you a little bit more, learn about how you got here and learn a bit more about what you've been doing lately. How are you doing? I'm happy to be here. I'm happy to be here. It's exciting to talk to you because you are amongst the earliest folks who have gotten our UX Master certification and it's fun to kind of check in with the folks who have been part of this program and have really taken it and run with it and owned it.

You got your UX master certification, is it 2018? Is that right? Yes, actually I was, I think, number 198 of the people that got the master certification. And since then, I've been working

as I used to do actually before that, in mostly enterprise environments. Some startups as well, but mostly enterprise environments. Big companies with thousands of people that have a lot of big systems that don't talk to each other. And serving a lot of people internally and serving a lot of people externally. So

I've been working with them mostly, as well as some startups. The whole point of integrating UX research is that of introducing and transforming the organization so that they can actually make use of the evidence that we provide in UX research, so that we can improve the product and people's lives.

I've been doing that, and I've also been measuring, or helping people and organizations measure, the user experience as well. And UX metrics or UX measurement, if you like, helps organizations make decisions and improve. And that's important for us practitioners and for the organization itself. It helps direct the product,

prove the ROI of UX, and a lot of practitioners inside the organization want that. And it also helps improve the team and the product. And, of course, the lives of users. So a lot of the organizations and teams I've worked for wanted to figure out if they're doing a good job and how they can improve. So, for example, when introducing UX measurement, I ask a few questions.

Yeah, like it's a transformation project always. So I start with questions and some of the questions I ask is how good or bad is our world? Think about sports. I mean, you have to measure how fast you run, how much you lift so you can improve it. Yeah.

Have we done our job well enough? That's another important question to ask in a bigger organization because there are a lot of teams that seek budget and allocation of budget. So you have to prove how good of a job you're doing and you can do that by measuring UX. How do you compare against another system? Let's say your team is building System X. How do you compare against another team? And if you can prove it, then...

How can we find information or evidence to actually decide where to pay attention? So these are the fundamental questions I ask, which are not easy to answer. It's like easy in theory, right? Yes, exactly. So for so many teams, it's easy to ask these questions. And actually, higher management do ask that. They don't expect just a UXer to come in and ask these questions. They ask these questions, which are fundamental, but they're not easy to answer.

And I think I have ideas about why.

Back in late 2020, beginning of 2021, I had some time off because we had a new child. So I decided, enough with nappies, I need to do something. So as a researcher, what do you do? You do research. So I started conducting research, qualitative research, yes, back then, with practitioners around the world. I believe there were 30 or 31 practitioners or something, where I was investigating their process.

or their perceived process, if you like, of how they work with other members and other teams in the organization, etc. And one of the things I discovered or I got ideas about is the fact that so many people, so many times, do not close the loop.

That's what the research indicated. And I'll explain in a moment what closing the loop means. And therefore, they cannot justify their existence and ask for budget in the organization. So many times they give up and quit. And by closing the loop, I mean we do UX work, research, design, we develop it, we deliver it, it goes out to users. And then what? There is no

feedback mechanism to tell us if we've done a good job, how good our job is, if we're moving in the right direction, what should we do next? Have we improved the user journeys? The loop stays open. I mean, I had this indication before conducting the research, but it seems that so many teams do not have a disciplined feedback mechanism and therefore so many times they get challenged and they have no answers.

So I ask questions, you know, it's my job to ask questions. Doesn't mean I have the answers, but, you know, I ask the questions. Yeah, absolutely. And that's, I think, the most important first step. And also just the awareness of the questions is important, right? Because I think to your point, there is a lot of interest in research and what customers are doing. What are they interested in doing? What are some of the goals we have?

And I often see like one of two mistakes. The first mistake being not taking a pulse of what's currently happening, right? To check, have we actually gotten better? Have we gotten worse? Right. They're just sort of like a blanket. Here's the metrics we have. It's just sort of like a, you're kind of picking through the toy basket and you're like, this is what we have. And we're not necessarily picking specific ones or tracking them over time. So that's one issue, right? We're not really being specific about

the pulse that we're taking before versus after. But then the second is not taking the pulse after, right? Where it's like, we designed it, we did it, it's done. And now what? Right now, how do we move forward? I mean, if we're, are we improving? Where should we focus next? You know, fundamental questions that product management actually asks. Yes. And I mean, the service

UX research brings to product management is that of helping them make informed and hopefully good decisions. That's our service. That's what we provide. So we should be able to help them answer this. Are we improving? Where should we allocate our budget? I mean, I've introduced

the answers, if you like, to this a lot by using common measurement instruments, like, for example, UMUX Lite, the SUS, the SEQ, satisfaction, stuff like that. But of course, teams can use whatever they need to use that brings them value, obviously. It's not always easy. And sometimes, depending on the maturity of the organization, you have a lot of pushback, or fear of metrics or measurement costs.

Yeah. What do you think that fear is of metrics, of measurement? It has to do with not knowing. It goes back to education. You typically fear something that you don't understand and you try to avoid. So it has to do with education. People think just because you have to do basic statistics, because you have to understand the instrument, maybe how reliable it is. They say, no, I'm not going to do this.

So they freeze. But you can actually educate people about this. Nobody was born knowing everything. We need both qualitative and quantitative research. And you've got to start from somewhere. That's fine. What can you do? Education is paramount, because I see people interested. They just don't know how. And that's where you can actually help and assist, by introducing them and running small experiments and showing, here's the value.

And here's the data. And this is what we can interpret from the data. But the interpretation is very important. I have found it very useful to map quant data to narratives. You can actually use an instrument like UMUX Lite, which measures utility and usability. And you can map it to adjectives. For example, the Microsoft adjectives list.
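For listeners who want to make that concrete, here is a minimal sketch of the kind of scoring and mapping John describes, assuming the standard two-item, 7-point UMUX-Lite questionnaire; the adjective cut-offs are illustrative placeholders, not the specific list he mentions:

```python
# A minimal sketch (not from the episode) of scoring UMUX-Lite responses and
# mapping the mean to an adjective for storytelling. The 0-100 rescaling of the
# two agreement items follows the published instrument; the adjective bands
# below are hypothetical cut-offs for illustration only.

def umux_lite_raw(capabilities: int, ease_of_use: int, scale_max: int = 7) -> float:
    """Rescale two 1..scale_max agreement items to a 0-100 score."""
    return ((capabilities - 1) + (ease_of_use - 1)) / (2 * (scale_max - 1)) * 100

def to_adjective(score: float) -> str:
    """Map a 0-100 score to a coarse adjective band (illustrative thresholds)."""
    bands = [(85, "excellent"), (70, "good"), (50, "OK"), (0, "poor")]
    return next(label for cutoff, label in bands if score >= cutoff)

# (capabilities, ease of use) per respondent -- made-up data
responses = [(6, 7), (5, 5), (7, 6), (4, 5)]
scores = [umux_lite_raw(c, e) for c, e in responses]
mean = sum(scores) / len(scores)
print(f"mean UMUX-Lite: {mean:.1f} -> users rate the product '{to_adjective(mean)}'")
```

The point is the last line: a stakeholder hears that users rate the product "good," not that the score is 77.1.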

And then you can explain to stakeholders that this is how people characterize our product, stuff like that. Moving from quant to qual in terms of narration and giving a story is also very important. And that's how you get people in, because the moment you can tell them a story, they're interested, and it's in the nature of humans to actually

try, not to discredit it, but to scrutinize it. And then you say, yeah, here's the data. And we came to this narrative from this kind of data. And they go like, ah, can we do this? Yes, we can. And therefore, you get them interested and involved.

Yeah. What comes to mind as soon as you mentioned that: there's this concept that when you're sharing insights with somebody, you're sharing something you've learned, you can share as many numbers as you want, but those numbers are kind of meaningless. Even if they're the worst numbers you've ever heard, like you can relate it to awful things, like awful tragedies in the world, like war or, you know, like hunger, and like all of these people are experiencing all these horrible things, and these are the numbers, and

And the numbers would be like, okay, okay, yeah, that sounds like it's not good. And then as soon as you tell the story about one of those people and you can bring it to life in a meaningful way, then it's like, oh my gosh, I need to do something right away. It becomes something that has more meaning. So yes, there's an importance there in tying those metrics to a narrative that actually has meaning.

meaning for people and that they can actually visualize whether that's for their own work or for future work that maybe they have yet to do. And there's actually two ways of visualizing this. One is what you just described. The other one is to actually show them charts with progress of how their product is doing as perceived by users.

and see, for example, that when we move from version 1 to version 2 to version 5.6, this is what happened. So if things are going badly, then you can go back and say, "Wait a minute, wait a minute. What did we introduce in that release that gave us this new score, which is statistically different to the previous score?" I'm not going to go into the details. So you can ask this question, say, "Okay, what did we do?
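As an aside, here is one minimal sketch of that release-over-release check, under the assumption that you have per-user scores from the same instrument for both versions; the numbers are made up, and Welch's t-test is just one common choice:

```python
# Compare per-user scores (e.g., UMUX-Lite) across two releases and ask whether
# the difference is statistically significant or plausibly noise.
# Scores are invented for illustration.
from scipy import stats

v1_scores = [72, 68, 75, 80, 71, 69, 77, 74]  # release 1
v2_scores = [61, 58, 66, 70, 59, 63, 65, 60]  # release 2

t, p = stats.ttest_ind(v2_scores, v1_scores, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Scores differ significantly -- time to ask what that release changed.")
```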

Let's say our utility, our functional adequacy, if you like, has deteriorated. Did we remove a feature? Let's say your usability went down. Did we introduce a feature that actually makes it harder for people to find or do task X? You can ask these questions, and then you can introduce qualitative research to find the why. So you pick signals

from the data, the what happens, and then, as the board behind you says, you actually discover the why. So quantitative measurement of UX, if you like, gives us signals

and benchmarks to actually ask more questions, which typically get answered with qualitative research. Yeah, I appreciate that point. And for those listening, I have a board back here that says, why is greater than what? And that's something that we often feel very strongly about at Nielsen Norman Group, because there's a signal, and then there's what causes that signal, right? But I really appreciate what you're saying as well, which is,

treating these metrics like they are signals and to follow the path, follow the rabbit hole and start to ask, use it as a jump off point to ask questions. And in a way, it's, I think, fascinating how qualitative data can often answer the questions of quantitative data and vice versa. You might have things that come up in qualitative data and then you have additional questions like, well, how often does that happen? Or

how significant is this issue? How severe is this issue? And like those sorts of questions, maybe you can sort of answer qualitatively, but ultimately you need numbers to figure out like the frequency or the magnitude, right? So in a way, fearing those numbers is sort of fearing the answers to questions and maybe reframing it as, well, we don't necessarily need perfect answers, right?

but it helps to have some answers, maybe even some additional questions as part of it. As always, yes. Right, absolutely. And I guess the other thing that is interesting too, like on the one hand, we're creating new knowledge, right? And so you kind of have to have an appetite for ambiguity or at least operating in ambiguity for a little bit. Ambiguity? Yeah.

What's ambiguity? What's that? No way. Yes. Right. There's a lot of living in this gray space while we wait for answers or while we look for answers. Right.

And that can often be an uncomfortable place to be. I say this myself, like it's an awful experience when you're there and you're like, I wish I knew the answer here. And I have hints of an answer, but we kind of need to do a bit more analysis. We need to dive really deeply. Now, I'm wondering, you know, as far as the work that you've done with other organizations, like you've mentioned a few different types of metrics people can use.

How do you think people are currently applying these metrics versus how ideally they should be applied? Do you think things are looking good or are things looking a little bit like they could be better? I do think, and I have observed, and you can see this in all corporate dashboards or startup dashboards, that people are generally interested in UX measurement. When applied right,

I've seen people staring at revelations, whether good or bad. So they're generally interested. Okay, that's the positive part of my answer. However, I have observed low maturity. And what do I mean by that? So many times metrics or measurement instruments, if you like,

are applied the wrong way. There's no good understanding of how to use the metrics, sorry, the instruments, what they are about, and how to deduce what you need to do from the data you get. So there's low maturity in terms of understanding the measurement instrument, where and how to apply it, and how to interpret it. The problem with that is

that people think it's magic and they need a consultant like myself to help them. The reality is that, yes, please do call me, but it's not hard with a bit of education. It is not hard. I mean, you can actually do a lot of good in your organization and your team by actually

doing a bit of UX measurement well, for the reasons I discussed earlier. So I observed that as well: low maturity. And lately I've been seeing a lot of, how may I say this, UX influencers, thought leaders, publish a lot of negative things about UX measurement and quant UX in general, which likely is negatively impacting

the community, and therefore not helping people close the loop, because we need to close the loop. On top of that, there is the debate between qual versus quant, which is naive, to say the least, because you need both, obviously. So yeah, I guess these are the main observations. When you see teams using quant and qual research,

it is more likely that they're more mature than organizations that just use qualitative research. That's another observation about how they're applied. And they're definitely more mature than organizations that only use quant. Shall I repeat that? Because a lot of people avoid contact with humans by actually doing only quant UX and actually don't do the right thing anyway.

So that's another observation. A lot of people think that just by sending a survey out or doing a bit of measurement online, you're doing UX research. No, you're not. And probably your maturity is low. But that's my very biased opinionated view of the world. You know, I think you're onto something though with that, especially when it comes to what you just mentioned, right? So to kind of recap, you have...

qualitative and quantitative research. If you're doing both, chances are it's a fairly mature organization. And so like for us at Nielsen Norman Group, we have research on UX maturity, like this UX maturity model, which is the concept that you have some appetite for

you know, research, right? You're interested in studying people in order to develop some sort of product or technology or service or whatever that experience is, right? That aligns with people and how they actually behave in the real world, right? So the more you use that evidence, the more you rely on that evidence, and the more that you kind of use it as a core philosophy, the more mature your organization is versus if you make decisions in maybe a more

intuitive way or maybe even in a way that actively avoids doing research where there's actually hostility to doing research and it's seen as a waste of money, right? That would be considered something like low maturity, right? And the reason why there is low maturity and resistance is because if you only do data and you torture the data enough, they will tell you whatever you want to hear. So if you only have data,

You can massage the data, transform the data to tell you exactly what you need to hear. But if you have videos of people using the product and failing, then what do you do? It's a big difference there. Yeah, it's funny. So yeah, when thinking about the use of the data, right? So if the most mature organizations are using qualitative and quantitative data, they're using both the least mature organizations, if they're using data, maybe using quantitative data. But to your point,

They can maybe massage the data. What comes to mind, there's this quote from Mark Twain, and I'm going to paraphrase it here, but it's something along the lines of, there are three kinds of lies. There's lies, I think they say damned lies, and statistics. And so you can always kind of frame something as happening differently.

depending on how you use the statistic or how you massage it. Now, I think to your next point, if you use some qualitative research, it's a bit harder to argue that, because you're seeing what is happening from the perspective of the person who's carrying out those tasks or whatever it is. You're starting to see these relationships between if this happens, then that also happens. So you have a little bit more maturity. But if you also turn a blind eye to, or turn away from, those metrics,

then you're also sort of purposefully avoiding, you know, the objective perspective. Because again, you can kind of massage qualitative data as well. It kind of depends where you pan the camera or who's invited to the sessions, right? Pro tip.

Yeah, not trying to advocate the manipulation of data, but there can often be a little bit of bias there too. So it's not the most mature as if you were to combine both, right?

There is bias everywhere. There is bias in how you select the pool of users. There is bias in how you write the questions. There is bias always in the instrument, in your setup. There is bias everywhere. That's why you combine things. There is bias everywhere, but you need to know the bias. Otherwise, you cannot conduct research. And to your point about the quant and the qual and the manipulation,

Okay, I've been helping organizations introduce UX research and I've done a lot of measurement work. However, most of my experience is actually in qualitative work. The fact that I can do statistics, write some code and analyze data is actually secondary because if you're not good in qualitative research, in my not so humble opinion, you cannot be good at quant.

UX research. There is no way. What skill have you found to be most helpful? You know, when it comes to dealing with qualitative research, what skill has been maybe the most helpful to make you a good quantitative researcher? What skill? Well, okay. We have the measurement and benchmarking stuff, and then you have the surveys. For the surveys, knowing how to ask questions is very important. Otherwise, you're getting the wrong data.

In terms of the stats or the analysis of data there, it's trying to figure out the narrative, trying to explain that as you would. So let's say you do thematic analysis and you summarize qual data. You're trying to tell a story. It's the same thing with data.

The same thing with signals you pick from analyzing thousands or tens of thousands of data points. You still have to tell a story. The storytelling is the most important part here. Then mapping it to something that people can actually consume and therefore make decisions. What is common in both is the ability to do clustering.

whether it's unstructured textual data or numbers, the ability to cluster and classify is common to both, in my experience. And I don't think I would ever be any good at quant if I couldn't do qual very well. I don't think I would be able to, because I wouldn't be able to tell the story.
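Here is a minimal sketch of that shared clustering skill, assuming open-ended survey comments as input; the comments are invented, and in practice the clusters would be candidate themes to verify qualitatively, not finished findings:

```python
# Group open-ended comments into rough themes with TF-IDF vectors and k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Couldn't find the export button anywhere",
    "Export to CSV is impossible to locate",
    "Love the new dashboard layout",
    "The dashboard redesign looks great",
    "Search never returns what I typed",
    "Search results feel random",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(3):
    print(f"theme {cluster}:")
    for text, label in zip(comments, labels):
        if label == cluster:
            print("  -", text)
```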

I think that's a crucial point. Yeah. And there's still going to be this element of categorization, right? Where you have many different kinds of metrics, right? And granted, you can't just...

I mean, you could, I guess. You could just throw whatever metrics you have and then see what changes. But that would be kind of overwhelming. It would be like me looking at 50 different sticky notes on my desk and thinking, okay, one of these sticky notes has the answer. Which one is it? Right? And you can leave them all out there that way. But if you can start to take a more systematic approach where you're thinking about the metrics that might tie best to the outcomes that you want.

or that might answer these questions more effectively than that. The ability to answer research questions, I feel like, and maybe that's what you're getting at here with this idea of clustering, right? We're starting to look for specific answers to these questions and finding the right instrument to measure it.

It's almost like picking the right tool for the right job, right? And it's not only that, but my favorite instrument in most cases is actually measuring usefulness, because it's fundamental. As Jakob said, usefulness is utility and usability. However, where do you apply this, or any instrument, for example?

If you just pop up, let's say it's out in the wild and you know the personas, you know your selection bias and everything, and you pop up on a web page, let's say, a questionnaire, and you ask about ease of use and utility, will you get the right answers?

We're going to get some answers. Will you be able to interpret them? I'm not really sure. However, if you've done your task analysis, you understand what the journey of the user is and what the goals are for that particular persona, then you can instrument your website to interrupt the user at the right time to get the right answer. How could you do that if you're not good at qualitative research, if you're not good at observing people, at conducting task analysis, at

figuring out goals and needs? You cannot. It's one thing to measure, but when to measure, and where to measure, and how to measure, and with whom: these things need you to understand the qual side of things more than anything else. Yeah, that's a great point. And I think that's what kind of gives you that high-level understanding of

of what's happening as a whole, right? I mean, we can certainly, I think there's definitely benefit. And I think it's funny you mentioned Dr. Jakob Nielsen, his usefulness equation, right? Usefulness equals usability plus utility. Utility meaning like, does it have a purpose? But the other thing was this concept of specializing, which is kind of, he mentions this in a different talk. I'll include the link to...

to the keynote or the talk where he mentions this. But he talks about the concept of generalists versus specialists, and how if you were to put a specialist in the Olympics, for example, if you think about those, I wanted to say it was like a decathlon or something, where you have many different events. And to do a decathlon, you have to be good at a lot of things. And then there's the individual events,

Within that, if you specialize in one of them, you'll probably win the gold medal versus the generalists who kind of know a little bit about everything. But there's sort of this trade-off at the same time. The specialist will probably be really terrible at other skills, not because it's a bad thing. That's just what they focused on and where their attention lies. And so depending on the team, maybe you have a lot of specialists, people who are really great at quantitative research and really great at qualitative research.

And they each are amazing at what they do. Ultimately, you do have to kind of combine that knowledge somehow. And I think that's where generalists can really shine is they may not have all the answers or all the tools. And they may even turn to the people who are specialists saying, tell me, you know, do I use a hammer for this? Do I use...

Do I use a wrench? Do I use a hex key? And then they can sort of tell you this is the best tool to use and this is why. And it's really that kind of marriage of these specialists and generalists together that kind of lead to this better outcome as far as team composition goes, right? That is why the in-fighting which is being bred sometimes by those influencers

I think is very bad for what lies ahead for a profession. I mean, again, you should not deter people from developing their skills and their capacity to actually do both qualitative and quantitative. You cannot be a specialist in everything, but you can probably survive more and go further if you're good enough at both. In most cases, I think there's more resilience.

And I think when people don't understand, as I said earlier, they fear and then they attack. And that's an indication of not understanding. And we have to help with education, and doing more to actually explain why closing the loop is a good thing, etc. Yeah. And actually, to your point earlier about the influencers kind of advocating, maybe, qualitative against quantitative, rather than

you know, thinking of them as sort of in conjunction with each other. I think part of that has to do with the pushback too: like you mentioned, when you rely only on quantitative data, then you're missing a huge piece of the puzzle. So in a way, I think these critiques can sometimes be taken out of context, right? Which is the context that, chances are, if you're not doing qualitative research, it's because you are in a UX-immature environment.

But it can be harmful. It's almost like taking coaching advice, right? Like I can get coaching advice from anybody, but if I get the coaching advice for somebody who's in a very different, you know, stage of life or whatever versus my team or me personally, then I might be applying the wrong prescription, right? The wrong antidote for the problem. Exactly, yes, yes.

So yeah, when thinking about more mature organizations, maybe the appetite for quantitative data is actually a good thing because that's going to help you get even further in your UX maturity than you may already be. So I think there's definitely something like a nugget of truth perhaps, but maybe for a specific audience. And that's often taken out of context, right? You can see and you can introduce both qual and quant at the same time.

and therefore accelerate the maturity of a team or organization. If you do the basics right, you can move further for longer by just doing the basics. So it's not like you cannot have both if your maturity is low and you're trying to improve. Recognize that you have low maturity and you can actually improve in both attributes at the same time if you have low maturity, as long as you understand that, of course. Yeah.

And that's the context. We understand that we are here. We need to go over there. What can we do? Ah, actually, we can do both. Right. And I think that's a really important point that you don't have to choose. You don't have to choose between these two. They can be proficiencies or competencies that you have.

improve, just like you might improve, I don't know, speaking skills. It's not like if I choose to improve speaking skills, then suddenly I'm giving up on visual design skills, right? It's just a different competency. It's a different way of allocating your time and your energy. And actually it compounds. It compounds exponentially.

How does it compound? How do they enhance each other? So if you know both, if you understand and practice both qual and quant research, the benefits are exponential. They're not linear. You don't just do more and get more; you do more and you get much, much more. Because, for example, you close the loop. If you have a feedback mechanism,

then you can amplify something. It's systems, sorry, control theory. You need the feedback mechanisms to amplify what you're doing. So you amplify. You did something: let's say you did qual research, therefore you figured out that you should design the system in this way, you designed it, you benchmarked it,

or you just left it out there in the wild, you measured it, and then you found that, oh, you missed something, and therefore you do more qual research, and then you improve it more and more and more.

It's just, and then not only that, that's internal to the team. Now more amplification. Hey, boss, we're doing this well. And our users tell us this and so on. Oh, excellent guys. Continue doing this. You have my trust, assuming, or you always have my trust, but anyway, continue down this road.

Oh, how about we do this extra? Oh, we need more budget. How are we going to get more budget? Oh, we're doing well and we can prove it. Oh, we can prove it. Let me go ask for more budget. More budget? Oh, let's do more good work. And you help everyone in the organization. You help the product. You help the user. Everyone's life is better. That's amplification. Yeah.

Got it. Yeah. So basically it's a feedback mechanism, not only for the immediate team, but it serves as an amplification measure for the team to get more resources, to help other teams, to kind of expand the influence of the work that's being done. Yeah. What also comes to mind too, and I think kind of speaks to the importance of your

your choice in instruments, right? As I think you mentioned measuring the right, like, how do you know when to measure, what to measure? Obviously then we have, we have lots of classes for that and we're certainly not going to make this an academic lecture here, but I do think it's, it's worth mentioning, you know, and I was just doing a course with a client yesterday related to measuring the impact of work

you know, or proposing certain things? How do you benchmark? And what comes to mind for me is

You have things like outcome metrics, perception metrics, and descriptive metrics. And these are all really helpful to think about because if you measure only one thing, then you're technically only changing one thing, right? So let's just say we're improving outcomes. Or actually, the example I often like to rely on is one about descriptive metrics. So something descriptive like

how long does it take someone to do a task, right? That's something we can observe. We can literally set a timer and see it. And then if we measure that time and we say, okay, well, we want to incentivize our ability to decrease that time on task. So it could be something even like a phone call that someone makes to a call center, right? We want to decrease the call time. And so we might say, okay, call center employees, we're going to give you a bonus. We're going to incentivize this

you know, see if you can decrease call time. Now the intended outcome might be that people get their calls resolved more quickly, but because it's the time that we're incentivizing, it may actually not

improve the resolution rate, right? It might actually make it worse, because now people are like, "I'm gonna transfer you now. Please hold." Short call time, but no good resolution. I think we're getting into a different discussion here, but, for example, it's a silly example, but it gets the point across. How long do people stay on a page, on a web page? I know it's silly, yeah?

Oh, we have huge engagement. People stay on our page a lot of time. Well, yeah, but is this good or bad? Is it because they found or they didn't find what they're looking for? You know, just improving one metric needs to have a counterbalance, typically, to figure out if you're moving towards the right direction.
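To make the counterbalance concrete, here is a minimal sketch using the call-center example from a moment ago; the data and the choice of paired metric are made up for illustration:

```python
# Never read one metric alone: pair average call time with resolution rate so
# a "faster" release that just transfers callers around gets caught.
calls_before = [(300, True), (420, True), (380, False), (350, True)]  # (seconds, resolved?)
calls_after = [(120, False), (150, True), (110, False), (140, False)]

def summarize(calls):
    avg_time = sum(seconds for seconds, _ in calls) / len(calls)
    resolution = sum(resolved for _, resolved in calls) / len(calls)
    return avg_time, resolution

for name, calls in [("before", calls_before), ("after", calls_after)]:
    avg, rate = summarize(calls)
    print(f"{name}: avg call {avg:.0f}s, resolution rate {rate:.0%}")

# The output shows calls got shorter while resolution collapsed: the single
# metric "call time" improved even though the experience got worse.
```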

Okay, that's system level thinking and improvement, but that's one of the main things you need to do as you advance down that path. And that's why I always start with usefulness.

Because it's so fundamental. If you improve usability and utility, you will improve the whole thing. That's the first one to introduce. Right. And I think those are, again, they're not easy questions to answer, but they're important. They're fundamental ones. So in a way, when you think about usability and utility, there's two parts. There's the perception, right? How useful is it?

Is it something that people want to use? Okay, that's one thing. Do I want to pay taxes? Right? No, but I have to, right? But then, you know, so then you kind of have to counterbalance. It's like, there are going to be certain goals, certain perceptions, certain things where you may want to improve perception, but it may be more...

practical or beneficial to improve something like, maybe, the descriptive or outcome-based metrics, right? And so you can kind of use these as like levers, right? As ways to kind of ensure that you're working toward whatever good means in your context, right? Yeah. If you try to optimize, over-optimize for one thing, let's say you have a car. Let's say you have a car, yeah? And you build the best engine in the car, the fastest engine.

And it becomes so powerful that you cannot steer it. Therefore, you crash. Wouldn't it be better if you have optimized, not optimized, but if you have designed the whole car so that it doesn't have the fastest engine, but it turns? So you can take a fast turn, or it can actually brake. It's important. Over-optimizing only one part of the chain of the system is going to make the system break. I appreciate the metaphor because...

That metaphor, I feel like really resonates in the sense that often when we're designing something, right? Whether we are the designer or we're the researcher, I use the term we design loosely. But as a team, we are making things. And we might have different KPIs or key performance indicators, like you were saying. And depending on how we talk through those and who sets them, and maybe we're not in charge of setting them, but maybe...

we can help to make sure that we're moving in the right direction as intended, right? Then that's a really important conversation to have. So that way, if we do fall short of certain objectives, then we have a reason for it. It's not just, well, I don't know, we didn't measure it or that's not something we tried to do, but it's, hey, we did this to the best of our ability given these constraints and given these outcomes that we want to achieve, right? So I think having those

Frank conversations, even though they're hard ones, can really help shed light on the work that we're doing and demystify it so that it's not just, oh, design's just doing design things. Right. But rather, we're making these decisions with these other business decision makers. So it's a tough balance. Right. Because ultimately, a design decision is a business decision. That's kind of what it is.

And that can often be a bit, you know, you can kind of butt heads sometimes when you make recommendations that may challenge the way that we run our business or the way that we work.

So I think you're right. It's important to close that loop. May challenge. May, may. I'm just, I'm being optimistic here. Absolutely. And I think that that's a great place to kind of wrap up and kind of give people food for thought as far as, you know, what they're going to do next to transform their organization. So trust, I agree. Trust is absolutely paramount to doing any other work we do and also giving people a chance to participate in that trust as well. I think there can often be a bit of...

isolation in the research work we do because either it's like, "Ah, no one's interested," or like, "I'll just do it to be a good citizen to my team." But sometimes just doing it ultimately keeps people out and doesn't give people the chance to, as my colleague Tanner Kohler often puts it, put their fingerprints on it. And when people put their fingerprints on something, they feel more invested, they feel more engaged, more interested in what ultimately happens. So

So, you know, while we can certainly offset the work, I think there's a way to build relationships by giving people a chance to look at it and have a say. And over-communicate it. I cannot stress this enough. You have to be patient. You have to be transparent and over-communicate the research that happens. You need to run experiments, communicate the results of the experiments and be really, really, really patient because you're going to challenge people.

And obviously, don't be heroes. Sometimes people just don't want to listen. You cannot save the world. Life's too short. Move on. Yeah, that's my advice. Pick your battles. Yeah, pick your battles and be patient. I think that's a really good way to look at it. And close the loop. Do not forget that. And close the loop. Don't forget to close the loop. That was Dr. John Pagonis. You can find links to his LinkedIn in the show notes.

By the way, there are plenty of quantitative research and metrics-related resources all available for free at our website. And if you want to stay up to date on the latest articles that we publish, we do have a weekly email newsletter. So sign up for that and you'll learn about all of our articles, videos, and upcoming online courses. You can find everything I just mentioned at www.nngroup.com.

And of course, if you enjoy this show in particular, please follow or subscribe on the podcast platform of your choice. This show is hosted and produced by me, Therese Fessenden. All editing and post-production is by Jonas Zellner. Music is by Tiny Music and Dressed in the Flamingo. That's it for today's show. Until next time, remember, keep it simple.