
6. Ethics in UX (feat. Maria Rosala, UX Specialist at NN/g)

January 1, 2021

NN/g UX Podcast

People
Maria Rosala
Therese Fessenden
Topics
Maria Rosala: User experience design involves many ethical considerations. Research ethics concerns the welfare of people who take part in research; design ethics concerns the welfare of the people who use products. Both rest on core principles such as respect for persons, doing no harm, and justice. Design has to account for potential negative effects, such as addiction, marginalization, and excessive stress. The principle of justice requires ensuring that all users experience the benefits and burdens of a product or service equitably. During the design process, teams need to consider worst-case outcomes and how a design could be abused, so as to avoid potential negative consequences. Users should be brought into the design process through participatory design or co-design, especially in contexts with ethical concerns. Paying attention to vulnerable groups, including them in research, and considering how a design affects them helps keep them from being harmed. Many teams rely too heavily on quantitative data and neglect qualitative user research, which leads to short-sighted design. Designers should recognize that they are not the user and should include diverse user groups in the design process to avoid bias.

Therese Fessenden: User-centered design is not necessarily ethical. Companies can meet user needs in manipulative ways, for example by collecting and using user data without users' knowledge. Platforms such as social media initially brought many benefits but also produced unforeseen negative consequences, such as censorship, bullying, and the spread of misinformation. Many of the ethical challenges designs face are systemic; the design itself merely reflects problems in the existing system. Design can either reinforce existing systems or promote equity, so designers need to keep a long-term perspective and recognize that every design element can have positive or negative effects.


Chapters
Maria Rosala discusses the distinctions between research ethics and design ethics, highlighting how both disciplines are underpinned by central concepts such as respect for persons, doing no harm, and ensuring justice, but apply these principles in different contexts.

Transcript


This is the Nielsen Norman Group UX Podcast. Happy New Year. I'm your host, Therese Fessenden. As with every other January 1st, many of us are reflecting on the year before, perhaps relishing any ounce of distance we can put between ourselves and 2020. But New Year's Day also gives us the opportunity to do more than put the past behind us. It's about new beginnings, new habits, and new resolutions to be better versions of ourselves.

UX is an industry that's centered on iterative improvement and finding opportunities to grow. So I thought it might be fitting to start this year by doing just that.

I'm excited to share an interview with one of my coworkers. My name is Maria Rosala and I'm a user experience specialist at Nielsen Norman Group. I had the opportunity to sit in on Maria's course, User Interviews, which was a lot of fun. I mean, as someone who does interviews on a regular basis, it's always kind of nice to reflect on some of the practices.

She teaches a number of classes with us. I teach five classes on various different topics, mostly UX research methods. But she has a particularly relevant interest: ethics, and research ethics in particular. So we discussed how being mindful of ethics can help us not just be good people, but can help improve the world around us with better, more human-centered designs.

In that class, you covered something really fascinating and really important, which is when you're doing user research, there are lots of considerations, ethical considerations, that you need to factor into your process. Yeah, there's a whole field of ethics which looks specifically at how we protect people's interests when participating in research.

So we do cover a little bit about that in the user interviews class. Of course, not in a lot of detail. And I do remember, I think it was last year, wasn't it? We filmed an online seminar on this topic, on research ethics.

It is one of the things that I'm fairly passionate about. I have an interest in and a specialism in specifically research ethics. And I guess one thing that we can talk about today is how does that differ from design ethics? Because there are some similarities and of course some differences. But yeah, we do cover a little bit about that in the user interviews class, not a lot, mainly around collecting informed consent, thinking about researching with people who might be vulnerable,

or researching sensitive topics that could cause people to get upset and how do we manage that and ensure we minimize risk. So that's really what research ethics is about, thinking about the welfare of people who take part in research activities and how do we report research afterwards as well and make sure that it doesn't do any harm to the people who've volunteered their time to take part in our research. Yeah, so we have...

a whole one-hour seminar on research ethics, which I highly recommend, not just as foundational for folks who are new to the research realm; even as someone who's been doing this for years, it was a great opportunity to learn how to handle some of these very unique cases, maybe sensitive topics, things like that. It was really a good class. So I guess on that note, you mentioned that

Design ethics and research ethics are slightly different. So, you know, I understand with research ethics, you're gathering data, you're learning about users. And in that process, in the process of gathering data, you want to ensure that you're protecting that user. So, yeah.

How is that different or what would be different when considering design ethics? And what do you think of ethics as a whole? Like what even is ethics? I realize that's a very philosophical question, but I think it's important to really drill down. What do we mean when we're saying that we're being ethical in our practices? Are we

making a judgment call on behalf of the user, or are we allowing our user to make that call? What do you think of this? Yeah, those are some really good questions. My early academic training was in philosophy. So we did cover ethics, and of course ethics is a fairly broad discipline, you know, looking at

morality, you know, what is right and wrong. And then, of course, you know, there are different aspects of ethics, there's applied ethics, which is, you know, the application of these, you know, particular discussions or case studies, looking at specific domains. So research ethics is one example of applied ethics.

Design ethics is another. I see them as two distinct disciplines. One, design ethics, is looking at the welfare of people who use our designs, right? How we, you know, treat them as they use our products or services, how we potentially collect and use their data, because that's obviously a big aspect as well. How we think about, you know, the long-term implications of design,

you know, the way that our designs are used, how they could be abused by people for nefarious reasons and nefarious results. And then, of course, research ethics is, you know, concerned with the welfare of people in research activities. So, yeah,

Some people think of research ethics as being part of design ethics because part of user-centered design is doing research with users. And so you can think of design ethics as an umbrella, but I tend to see them as separate things, separate disciplines that require different activities. But they are both underpinned by central concepts. So concepts like

respect for people, respect for persons. So thinking about, you know, each person has a right to make their own choices, and we have to respect that; a right to choose their own actions and to kind of live their life as they would like. So we should respect that and treat people with dignity.

you know, we should do no harm, right? And this, again, is a central concept that underpins, you know, lots of applied ethics. So thinking about, in research, for example: we don't, you know, open that person up to risks. We don't cause them to get upset, or we don't, you know, inadvertently leak their identities to people who might

you know, take actions based on that. So employee research, for example, is a tricky one because often it's very difficult to do anonymous research. And as a result, sometimes, you know, people admit things to us, you know, to the researcher, and then that becomes known to perhaps their managers and there could be consequences to that.

But in design ethics, you know, there are lots of ways that we could potentially harm people. You know, we could cause them to become addicted to the

products and services that we create. We could cause people to be marginalized as a result of the way that our products and services are designed. We could cause people to become stressed or overloaded by all of our notifications, or enable bullying and harassment on certain platforms as a result of the way that things have been designed. So lots of

you know, negatives that can come out of design, unfortunately. And again, you know, do no harm is one central concept that underpins both of them.

And the last one, really, that I think is important is justice. So ensuring that it's equitable: everybody, regardless of which kind of user is using your platform, hopefully experiences the same burdens and the same benefits by using your particular product or service. It's not the case that there's going to be one group that

is excluded or carries all of the cost while another user group carries all of the benefit; that would be unfair. And the same concept, again, underpins research ethics as well, thinking about who we're recruiting. Is it equitable? Do we have representation from

all the various groups, you know, out there, and are they contributing to this process to hopefully build a better product and service for everybody, not just for a select few people? So similarities, but they're kind of separate domains. Got it. So basically you take those similar ethical principles, but you're applying them in different ways. You're applying them in design, in business,

in terms of looking at how that design presents information or how that design takes in information. Whereas when you're looking at research ethics, you're looking at the process of gathering that data and how that very process can impact those people, whoever it is you might be researching. Yeah, exactly. Yeah. So, I love this topic because it makes my brain hurt, but it also is really important. And

When we have ethical considerations in our design, we often have better designs. But it's interesting to me because I think people also assume that, oh, if it's a good design, then it is ethically considerate. Like if it's user-centered, it ideally already is accounting for user desires and user needs. So if users are picking the technology that best fits their needs, then hypothetically, it should be the most ethical designs

that bubble up to the top or that become mainstream. But what do you think of this? Like, do you think that's true? Or do you think that companies can sort of game this where, you know, maybe a user need is being met,

but somehow it's still an unethical design? Have you seen any patterns like this? Yeah, it really does depend on your definition of user-centered. If user-centered is thinking about all of the possible negative consequences that can come about by

people abusing your particular product or service, and thinking about perhaps long-term costs, then yeah, you know, in an ideal world, maybe the design as a result would be ethical. But the reality is, we're not the only person responsible for delivering a product or service. We work with other people who have different objectives. Often, we don't have control over things like what kind of data is collected about

these users? How is it used? Where is it stored? And we often don't think about long-term implications. Like maybe we can gather a lot of data about this specific individual and then sell it to a load of third parties that can use that to profile you and target you.

You as the user are not necessarily going to know that that's what's happening when you, perhaps, you know, agree to some terms and conditions and you sign on and use that product or use that service. But that product or service can still meet your needs, right? And perhaps some of that data is going to be used

to improve the user experience by making it more relevant; perhaps, you know, the content or the products that are offered to you are more relevant as a result, it's more convenient. But maybe, you know, some of the negatives, you don't feel them for a while, until things get kind of out of control or until you have a situation where,

Suddenly, maybe you're being denied a mortgage or something along those lines. And that data has come from somewhere else. We see these unintended negative consequences. And particularly if it's something like social media, great, it's connected people initially. And there were lots of benefits and designers focused on those benefits.

But there were lots of unforeseen trade-offs. Some of these things that we've observed over the last few years are really almost unprecedented. You know, how people would use technology to censor other people, to bully and harass, to create fake information. All of these are negative consequences of the design.

Some of those could have possibly been avoided, some of them possibly not. And it's the job of an organization and designers to think about how we can solve these problems in a way that doesn't necessarily remove the benefits, but avoids some of those harms.

I actually want to give one example of an unforeseen trade-off that we talk about in a class I teach called Design Trade-offs and UX Decision Frameworks. And we have a particular case study that we talk about, which probably many people who are listening to this are aware of, but it comes from Airbnb. And in the early days of Airbnb, you know, this was a really new way of offering rooms to people or, you know, lodging to people.

And in the early days, Airbnb designers really wanted to make it very easy for hosts, people who are offering up their own homes, to feel comfortable allowing guests into their homes.

And so some of the design decisions were around, you know, giving hosts enough information, as much information as possible about the person who is requesting a particular lodging so that, you know, the host can feel like, yes, this is a, you know, decent person. This person's not going to destroy my home. I'm happy to allow this person to come in. I can create a human connection with a person who might be staying in my home. And then what they found was that there were lots of

discrimination. So people who, you know, had foreign-sounding names, or who were Black and had Black-sounding names, a lot of those people were getting rejected a lot of the time by hosts. The hosts had the option to say, sorry, you know,

I'm not going to approve this particular stay. But there was independent research, some experiments done by Harvard University, and they found, you know, that people with foreign-sounding names or Black-sounding names were more likely to be rejected. Wow. So huge problem, completely unforeseen

by the design team, but it just shows, you know, that design can be used in ways that you don't anticipate. And it's really important to think about these as much as you can in advance. Think about: what's the worst possible outcome that could come about from this? How could this particular design be abused by others?

So putting your sort of negative-thinking hat on and thinking about what potential unforeseen trade-offs could occur, and how those could negatively impact people who use your product or service, is going to be really important as well. Yeah, that's a really interesting case study. Thanks for sharing that. And I think it really highlights how difficult this topic and this responsibility is for designers. Right.

I know that there are some large corporations out there that have design ethicists on staff, whose primary responsibility is to examine the ethics of certain design decisions. I realize, though, that it's probably a tough ask for every single design agency and organization out there to have a design ethicist on staff.

But yeah, short of having a design ethicist on staff, you know, what tips could you give teams who are looking to ensure that their designs are ethical or are reducing, you know, any amount of harm that they might be unintentionally inflicting? What can people do to really ensure that they're making good decisions?

Yeah, that's a great question. And yeah, you're right. You know, a lot of big corporations, big companies and big organizations are looking to have people in-house that can think about these really tricky things because, you know, a lot of designers are working on projects that have tight timescales, like they need to deliver something. There's very little time to think at this sort of broad level, thinking about like long-term outcomes, right?

you know, running long-term sort of experiments or, you know, finding ways of capturing good research data about how these things are being used. Or just thinking deeply; we should spend time thinking about it. But unfortunately, the reality is that a lot of, you know, people working on design projects don't have that time. So I think it is important to have people in the organization that can start to think about those things. But I don't think that's the only thing we should do. And I don't think that's sufficient.

So I do think that it's everybody's responsibility. It's not just, you know, the UX designer's responsibility to think about this. It's everybody's responsibility. You know, at a leadership level, hopefully, these conversations are being had, thinking strategically about how to ensure that we're not setting ourselves up for, you know, being in a situation where we have to reverse certain decisions or, you know, spend a lot of time rethinking

how we're doing things to try and avoid harming people or neglecting certain groups, or whatever that outcome might be. So perhaps consider your personas, and look across all of your personas and say, well, how does this design decision affect each of these individuals? Are there any negative consequences? What are those? Let's expose them. Let's go away and do some research to figure out if we can avoid them. When I was studying for my master's, which was in human-computer interaction with ergonomics, I remember very clearly

the lecturer talking about, you know, when you design systems, you also have to design for people who might abuse the system, those abusers who don't use the system the way it's intended. And I think we forget about that. We often have this sort of rose-tinted view of the world around us. And we're thinking, oh, this would be wonderful for users. But we forget to think about, you know, how people could perhaps exploit the design. So I think planning for that is really important.

But then it's also about caring about people, and people are not just digits, right? They're not just numbers. And I think some organizations fall into that trap where they think about people as a number and not as a person. And I think, you know, that's why it's really important to do qualitative research and to do research with your real users: go out and speak to them and learn from them.

On that topic, could you offer any advice to teams about how to keep a long-term perspective? Because as you said, it can be so easy to get tunnel vision on, you know, the success of our designs. And metrics are great. They can be very useful. They can also be, you know, a bit short-sighted depending on how we look at these metrics. So is there any advice you could give teams

on staying focused on long-term effects and long-term gains, as opposed to shorter-term gains and

consequences of designs? A lot of teams are relying on quantitative data, or they're obsessed with metrics, and they design purely to improve those metrics, not actually going out and doing qualitative user research with people who'll be using their products and services and really getting to empathize with them and learn about them and understand how those products or services affect them. So I think that's a really important thing

that a lot of teams are unfortunately not doing. Maybe think about bringing users into the design process, doing participatory design or co-design, particularly in contexts where there are a lot of ethical issues and where the design could affect certain groups. So therefore, can we not bring them into the design process and allow them to help us create better designs that are more ethical?

And the last thing I would say is: think about your most vulnerable users, or the people in society who could be the most vulnerable, and design for them. Include them in research and think about them, because they're often the people who get marginalized or have no choice in what kind of products or services they use. They have very limited options available to them, whether that's because they have certain

you know, disabilities or accessibility needs, or whether they belong to, you know, a particular socio-demographic group. But involving them, doing research with them, and thinking about how this could impact them is going to be really important, because they're often the people who are negatively impacted.

So there are a lot of different ways that teams can think about trying to improve from an ethical point of view. But there isn't, like, one quick fix. You can't just hire a person and suddenly all these problems are gone. This is a complex domain; it requires continuous iteration and research and design to get around some of these really big systemic problems.

Yeah, that last phrase you used, systemic problems; it's worth taking a moment to recognize that a lot of these are often larger than the design itself. There's actually a really great TED Talk I heard once by a fantastic speaker, a computer scientist and researcher named Joy Buolamwini. Probably butchered that name, but highly recommend checking it out. She's the founder of the Algorithmic Justice League. And

part of her research goes into how algorithms, you know, are really just reflecting the processes and the things that exist in the world, right? Because as we talked about earlier, a lot of the ethical considerations and a lot of the challenges that designers are facing are often larger than the design itself, right? Systemic-level issues. And I love that you brought up that bit about vulnerable users because

sometimes it's just a reflection of the system that we're automating, not necessarily a reflection of any individual's, you know, biases, although certainly that can come into a design as well. But I think keeping all of that in mind and knowing that each of our designs has a role in either reinforcing some of the pre-existing systems that are in place

or in hopefully equalizing and creating an equitable outcome, as you said. So I think as long as designers are keeping that long-term perspective and understanding that every

item and every little widget that we design has the opportunity to either make great changes or, you know, continue the status quo. And hopefully the great changes are what come. Yeah. But I think the point that you made is a really good one. Unfortunately, values pervade everything we do. And even if you think you're being objective, you're not; you're applying your own values to things. And unfortunately, if we have

a group of designers and they all come from similar backgrounds, they all have similar experiences,

you use that and you apply it without even consciously realizing it in the things that you design, in the way that you design things, which is why we say at Nielsen Norman Group, you are not the user, right? And pretty much everyone who works in UX knows this slogan: you are not the user. So, you know, you shouldn't expect that you know how people are going to behave or know what they need or know how they're going to react to certain things.

It's so important that you have that representation in the design process, not only hopefully by recruiting a diverse design team, but also by thinking about including, you know, as much as possible, a diverse group of people in your research process. So hugely important if you want to avoid making these massive mistakes that a lot of companies and organizations have made, especially using

AI algorithms where, you know, it just propels stereotypes or, you know, continues those biases that we all have, represented in that algorithm. So we need to do better. Absolutely. I think that's an inspiring note to end on. This will be the first episode of January, as design teams are looking to make resolutions. I think we all can make a resolution to

make designs that are truly beneficial for all. So,

If others want to follow you, or maybe the work that you're currently doing, where would you recommend people follow you or check out some of your work? Of course, the Nielsen Norman Group website, where I publish articles, but I do share some of those articles and links to reports I've written on my Twitter account, which is, let me remember, because

one is a hyphen and one is an underscore. I think it's Maria Rosala, and between my first and last name is an underscore for Twitter. And then I'm also on LinkedIn as well, so Maria hyphen Rosala on LinkedIn. This has been fantastic. Thank you for giving me delightful brain hurts. As a former high school teacher used to say, the brain hurts are what make our work worth it. So I appreciate you. Thank you for your time. Thanks for having me.

Thank you for listening to this episode of the NNG UX podcast. If you want more information on any of the courses or resources that we cited in this episode, check out the links that we've listed in the show notes found in the description of the podcast.

We have a number of upcoming UX certification events as well, some as early as late January, and we publish free articles and videos every single week. So definitely sign up for our weekly newsletter if you want updates on the latest UX research that we've been working on. To learn more, go to nngroup.com. That's n-n-g-r-o-u-p dot com.

And of course, if you like this show and want to support the work that we do, please hit subscribe on the podcast platform of your choice. Thank you so much for your support. Until next time, and remember, keep it simple.