
#54 AI and User Research: A Deep Dive with Kate Moran

2024/2/1

Future of UX | Your Design, Tech and User Experience Podcast | AI Design

Topics
Kate Moran: This episode discusses the use of AI in user experience research. Kate Moran shares her 12 years of UX research experience and her views on applying AI in the research process. She believes AI can help reduce repetitive and tedious research work, such as brainstorming, creating research plans, and writing interview questions. However, AI cannot fully replace user research, because AI tools vary widely in quality and AI can misinterpret and hallucinate. Moran recommends treating AI tools as research assistants rather than as researchers themselves, and stresses the need for critical thinking and careful checking of AI output. She also discusses AI in content creation and using AI tools to rewrite content, while noting that AI may change the original meaning. In addition, Moran addresses ethical issues raised by AI, such as data privacy and intellectual property, and advises protecting user privacy and avoiding leaking sensitive information when using AI tools. She believes that in the AI era, content creators need to find new ways to promote their content, and calls on AI tool developers to consider ethics when designing AI systems and to ensure AI tools can be used safely and responsibly.


Chapters
Kate Moran shares her journey into UX, starting from her early interest in the field and her experiences transitioning from teaching herself front-end development to becoming a UX designer at Nielsen Norman Group.

Transcript


Hello and welcome to another episode of the Future of UX podcast, your number one resource when it comes to the future, new technologies and design trends.

And I'm having Kate Moran with me in this episode. She's the VP of Research and Content at the Nielsen Norman Group. And we had a fascinating discussion about AI in the research process. As you all know, I am a big advocate of generative AI and especially designers using these new tools.

And Kate provides plenty of tips and resources on exactly how and where to use generative AI and large language models. We also talk about some critical issues, as caution is necessary, of course. But more on that in this wonderful podcast episode with Kate. I wish you a lot of fun listening.

Okay, so hi, Kate. Thank you so much for coming to the podcast. Welcome. Thank you. I'm so happy to be here.

Sure. Very happy to have you. So before we are diving into the topic and into all the content that we want to talk about, please introduce yourself, tell the listeners a little bit about yourself and your background. So I'm Kate Moran. I am a VP at Nielsen Norman Group. I'm actually VP of Research and Content, which are my two favorite things. So I'm very happy in this role.

I've been working in the field of UX for about 12 years now. And in that time, I've held a lot of different roles. You know, I always love hearing people's stories about how they ended up getting into UX because often they have these very interesting stories about, oh, I was in fashion design or I was in industrial design. And then I got inspired and I moved into this field.

I don't have a cool story like that. I am very decisive. And as soon as I learned about UX, which was when I was probably about 18, 19 years old, I was like, that's it, I'm going to do that. So yeah, it just really spoke to me. You know, a lot of people say they have similar attractions to the field. It's people, it's technology. And that definitely was true for me.

But as many of your listeners probably know, it's kind of hard to get that first UX job. Usually a lot of companies are looking for some amount of experience, which I didn't have. So I started off teaching myself front-end development, because I noticed there were a lot more job postings for developers at the time than there were for user experience professionals. And I really enjoyed that for a while, then moved into content, content management, copy editing.

And then from there, I got my first job as a UX designer at an agency and then got recruited for Nielsen Norman Group when I completed my master's degree. So I have an undergraduate and a master's degree, both in information science. So I spent a lot of time studying human-computer interaction, information seeking, really fun, nerdy stuff like that.

And when I was finishing my master's degree, I was working on my thesis, which I decided to study flat design. It'll kind of date me a little bit, but people remember when flat design was like a really hot topic and kind of controversial. So I did my master's thesis on flat design. And I remember at the time feeling like I was working so hard on this thesis and kind of wondering to myself,

why? I knew I was going to get my degree. Why was I working so hard on this? I remember thinking to myself, very vividly remember, this is just going to go in a pile with all the other master's theses. No one's ever going to read it. The irony of that was that Jacob Nielsen actually saw my master's thesis and

thought, oh, we could just publish that on our website right away. We could publish that on nngroup.com. So he reached out, invited me to apply, and everything kind of went from there. So I love a lot of things about working at NNG, as we call it. One of my favorite things is it's a really great platform to contribute and give back to the field of user experience.

So I really enjoy that, just meeting different teams, talking to them about the issues they're facing and then thinking, okay, how do we study that? Or how do we help find a solution for that that we can provide? And that's certainly where we are right now with AI is really talking to a lot of teams, hearing their questions, their concerns, and there's so much confusion. And we're working really hard right now to try to demystify some of that.

Yeah, cool. I love that. First of all, thank you so much for sharing your story, how you got into UX. Super inspiring. And I can totally relate because my background is graphic design. So I also switched to UX and I know about the struggle. So really cool. And yeah, so you just mentioned the interviews you're doing, the research you're doing, especially on AI at the moment. And

I think it's super fascinating to see the research studies that you're doing on AI, like the apple picking, accordion. These things are super interesting because they're really helpful for design teams nowadays to really understand AI, how to use AI, but also how to work with AI in AI products, right?

And you as a researcher, how do you see AI in our workflows, in our research workflows? You know, on Instagram, I get a lot of questions usually about like how to use AI in research workflows.

What is your answer? What would you say? Well, that's a big question. And I think that's something that kind of as a field, we're figuring that out right now. But I can say that I am seeing lots of opportunities to reduce the redundant kind of tedious parts of research. And look, I love user research. You know, I...

I remember the early days when, if you were doing a quantitative study, we didn't have video recordings, so we had to use a physical stopwatch to mark when somebody stopped and started a task. So over a decade plus, I've been thrilled to see so many new tools enter the market. As the field of user experience has grown, so have the supporting industries.

So today, recruitment, facilitation, certainly remote research, analysis, we have this wonderful collection of support tools that make conducting research a lot easier and more enjoyable. And I really see these AI features and AI tools as being the next iteration of that.

Now, right now, there's kind of a gold rush happening. You know, there's so much interest and not just in UX, of course. There's so much interest in AI. Every company, literally like every company right now is like, we feel like we should integrate AI somehow. How should we do that? As a result, I am seeing a lot of variety in the quality of offerings for like AI tools specifically for UX research.

And I would say that there are also a lot of marketing claims being made that I encourage UX researchers to be very skeptical about.

But even if we think about something basic: if you talk to someone who has tried any AI tool these days, it's likely to have been ChatGPT. Even with that tool, which is by no means specialized for UX research, there's a lot we can do. It can be a really helpful tool, for example, for brainstorming, coming up with different research questions or ideas about what you can study.

It can help you create a research plan, figure out what needs to go in that plan, can help you write interview questions and usability testing tasks. So there's a lot of opportunities already to use AI tools to support our research workflows. What makes me nervous is there are some companies who seem to be

either marketing their product as, or kind of moving toward, this goal of almost replacing UX research. And I have a lot of thoughts about that. I can imagine.

I think that UX research can be expensive. In some cases, it doesn't have to be, certainly, but it can be expensive, time-consuming. It can be, I think, deprioritized in a lot of organizations. And so I think there's an interest in how do we shortcut this.

And that's what makes me nervous because I don't think that AI is ever going to replace the need for UX research. I don't think it's ever going to replace the need to talk to users, even though there are companies right now that are making product offerings that kind of claim to be doing that. Yeah.

So I would definitely, yes, I would definitely encourage UX researchers to be a bit cautious and a bit skeptical when they're looking at a new tool to use in their workflow. So skeptical, I think, is a great starting point, right? But those UX researchers who say, OK, I want to use AI tools.

Maybe I also want to use these tools that pretend that we don't need users.

Is there a way to use them strategically? Or do you think it's much better to use AI tools like ChatGPT for more of the strategic part, the planning and organization? Yeah, I think especially with the tools that are in the market right now and kind of where we are with AI currently. Obviously this is changing every day, so it could change very soon. But kind of where we are right now,

I really encourage UX researchers to think about these AI tools as kind of like a UX research assistant, not a UX researcher and not a coach, not an advisor. I think that especially for new researchers, UX is very nuanced. And there are a lot of, you know, we have this phrase we use constantly, it depends.

Because UX is very contextual and that certainly applies for research. You know, a methodology that works really well in one situation isn't going to work as well in another. And it's very context dependent. So I have seen a lot of AI tools struggle to understand the level of nuance and complexity needed to run really excellent UX research.

So for now, I would recommend don't think about these tools as being a full replacement for the need of UX research or a, you know, a UX researcher that you can trust to, you know, deliver recommendations that you'll just accept without question. And for people who are newer to UX research, I would recommend an extra ounce of caution.

just because it does make it harder to catch those moments where the AI is hallucinating, or it's giving information that's factually incorrect, or it's telling you to run a study in a way that is not actually going to be best for your goals. So that's kind of the skepticism I would encourage UX researchers to think about. Treat it as: this is a tool that needs a lot of oversight, and you need to check its work, just like an intern's.

Not: I'll trust this to tell me how I should improve my product. Makes total sense. And what are your thoughts about using AI for synthesizing? So basically for the define phase. So I think that AI is an excellent synthesis tool. It does do a really good job: you throw a bunch of information at it and it can distill it down for you.
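One practical safeguard for AI-assisted synthesis is to mechanically verify that every quote the tool attributes to a participant actually appears in the session transcript. A minimal Python sketch (the function names and normalization rules are illustrative, not from any specific tool):

```python
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so small transcription
    differences don't cause false alarms."""
    return " ".join(re.findall(r"[a-z0-9]+", text.lower()))

def unverified_quotes(quotes: list[str], transcript: str) -> list[str]:
    """Return the AI-attributed quotes that do NOT appear
    (after normalization) in the session transcript."""
    haystack = normalize(transcript)
    return [q for q in quotes if normalize(q) not in haystack]

transcript = "P3: I couldn't find the filter. It was hidden behind the menu."
ai_quotes = ["I couldn't find the filter", "This app is useless"]
print(unverified_quotes(ai_quotes, transcript))  # -> ['This app is useless']
```

Any quote the check flags would then be traced back to the recording by a human before it goes into a report.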

What I have seen, though, is that every now and then it will misinterpret something. It will hallucinate, and maybe pull out a user quote that wasn't in the research at all. So using tools that are capable, that have a focus on avoiding those issues, and that have a realistic perspective on AI's limitations, is really important to me.

So, for example, I was just recently speaking with a team at a company called Wevo. It's not Weemo or Waymo; it's Wevo. And I have to say, I haven't used this product before, so I cannot vouch for it. But I was chatting with the team about their approach, and they like to describe their tool as human-augmented AI,

which is, I think, the right way to think about it. You know, you still need that oversight. You still need that critical thinking. You still need to be reviewing things. Similarly, another tool that I like right now that is not UX research specific, but is, I think, doing a good job of understanding the limitations of AI is perplexity. That's kind of, for me at least, replaced Google in terms of most of my information seeking.

And what I like about that tool, and I've also spoken to their design team, is that they have a big emphasis on citing sources, and citing them in a very detailed way. So one of my favorite features of Perplexity is they have footnotes next to individual sentences or pieces of information that cite the specific source. So again, it's not a UX research tool, but it reflects that they're aware of some of these risks of AI, and they're designing their systems in a way that

is going to try to account for and correct those risks. So you just brought up such an important point with the quotation, because also something I really would like to hear your thoughts about, you know, when you use ChatGPT at the moment or also other AI tools,

You don't really know where the answer is coming from. What is the input? There are no quotations at all. If you ask for links, then sometimes you get something. Sometimes. But you're not really sure. Yeah, and then it's not correct, right? So it's not always working. So how do you see that big topic at the NN Group, you know, when it comes to content marketing, you know, all the content that you prepared over the last years, you know?

All the articles, everything that is out there, it's, you know, it's immense. And you don't really know where it is at the moment, if it's used in any of these AI tools. How are you dealing with this situation? Do you think it's problematic, or do you actually support it? You know, I would summarize my entire approach to AI, in all its various forms and applications, as simply: cautiously optimistic and supportive, while also trying to be realistic about the problems that we're going to encounter. And I do believe in the long run, we're going to iron out a lot of these things. It's just kind of the newness of this technology in these different applications. But I think in the short term, we do need to acknowledge that there's going to be some chaos and some

you know, systemic change that's going to have to happen. So content marketing is certainly one aspect of that that I have my eye on. A lot of people don't know this about Nielsen Norman Group, but we kind of have a, we have kind of two different priorities in terms of the work that we do. We're trying to provide really high quality, reliable, research-backed UX guidance to the world for free.

But we're also doing that so that we can raise awareness about our paid offerings like our training courses because we're not actually a nonprofit. So we do still need to fund that free content that we provide for people.

So historically, over Nielsen Norman Group's last 25 years of existence, the way that the company has done this is producing these articles, which get really great SEO, drive a lot of traffic, build awareness for a brand name, which eventually a very small segment of that audience ends up reaching out to us for consulting or for training. So that's kind of how a lot of companies work. That's essentially content marketing.

Now we're seeing what happens when our content has been hoovered up, and we know that it already has been, by lots of different AI systems, including ChatGPT, and used to train those systems. It was freely available content, so they just sucked it up. Now, in the reality that we're moving towards: I just mentioned that Perplexity has replaced Google for me.

That is going to be increasingly the case. And right now it's kind of early adopters like us, people who work in technology and are interested in it, but very soon it's going to become much more mainstream. Jacob Nielsen actually just recently wrote a post about how SEO is dead. And then in the article he admits it's not dead yet, but it will be dying over the next couple of years. It's in its, you know, death throes, I guess you could say. So in a world where

We are not getting as many page views. People are not visiting our website directly, but instead are going to something like ChatGPT or Bard or Bing and saying: I have this UX problem. For example, I need to run a longitudinal study for work, and I've never done that before. What kind of study should I run and how should I set it up?

In our current situation, people are asking ChatGPT to do that. And we know this because we conduct research with our own users. Of course, we've seen how their behaviors are changing. What is happening is tools like ChatGPT are returning Nielsen Norman Group advice to users.

But they're also mixing it up with other sources of advice on the Internet. And as you probably know, there's some really, really good advice out there related to UX. There's also some really, really bad advice. Oh, yeah. So that's all getting muddled together. And...

We aren't really getting credit for it. So as you just said, it really struggles to cite its sources, because all the information goes into the black box, and then it spits out a response but can't really trace where that came from. As a result, we're not getting our name out there. And so that does defeat the purpose of content marketing. It also makes me nervous for UX professionals who are going to be getting advice from this tool that doesn't really understand UX or UX research, and has difficulty discerning which sources are reliable and good and which ones are not. So that's definitely concerning for us as a business. It's concerning for UX as a field. But again, tools like Perplexity really

give me a lot of hope because I'm seeing that there are companies, there are organizations who are working on information seeking AI tools that are more focused on reliable information and quality information and citing sources.

Funnily enough, right when all this chatter about ChatGPT started earlier this year, we were playing around with it, of course, like every team was. And we were talking about it on our internal Slack. And one of my colleagues, Kate Kaplan, asked it to write an NN/g-style article, to see how well it did.

And it did okay. And she asked it to cite sources. And it made up a fake NNG employee that has never existed. It hallucinated. No way. And an article that doesn't exist. So, yeah. So, again, I think that the citation issue is a concern.

But I know in your interview with Sarah Gibbons in one of your earlier episodes, I heard her mention that curation is really important in this new era. And that's definitely what we believe at NN/g. So one thing that we're looking at now is: how do we provide our information with the ease and the context and the customization that's available with an AI conversational tool,

but is specifically fed on our material. I do think for a lot of websites and organizations, that will be the direction they'll want to move in. Curation and ensuring this is not everything on the Internet. This is just this one set that we can put our name on and say, "We did this. We built this library over 25 years. We built this library of literally thousands of UX resources."
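The "fed on our material" approach Kate describes is essentially retrieval-scoped generation: answer only from a curated library. A toy Python sketch, purely illustrative (the library entries and the word-overlap scoring are stand-ins; real systems use embedding search and then pass the retrieved passages to a language model as context):

```python
import re

def tokenize(text: str) -> set[str]:
    """Crude word tokenizer; a real system would use embeddings."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def top_passages(question: str, library: dict[str, str], k: int = 2) -> list[str]:
    """Rank curated articles by word overlap with the question and
    return the k best titles; their text would become the model's context."""
    q = tokenize(question)
    ranked = sorted(library, key=lambda t: len(q & tokenize(library[t])), reverse=True)
    return ranked[:k]

library = {
    "Longitudinal Studies": "how to plan and run a longitudinal diary study with users",
    "Flat Design": "flat design reduces clickability signifiers in user interfaces",
    "Usability Testing": "how to write tasks and run a moderated usability study",
}
print(top_passages("How should I run a longitudinal study?", library))
# -> ['Longitudinal Studies', 'Usability Testing']
```

Because the model only ever sees passages from the curated set, each answer can cite exactly which articles it drew on.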

But we need to deliver that, I think, with the convenience and the capacity of AI. And would this rather be something that you integrate into a GPT, like building your own GPT with all the NN/g knowledge? Or would this redesign your website completely differently, more like ChatGPT, with a text prompt as the interaction?

Yeah, that's a great question. And those are questions we're thinking about right now and exploring. And I think a lot of it depends on what vendors emerge, kind of are emerging now and will continue to emerge, in the space of customized chatbots. So I was really excited, like a lot of people, when the custom GPT feature was released, and I went and played with it and was able to make some pretty useful GPTs. Like

I do a lot of copy editing as part of my role. And so it's really nice to be able to have a dedicated little like copy editor bot that already understands I'm editing these articles and, you know, what I'm looking for.

I tried to make a GPT that was just Nielsen Norman Group: only consult these sources, try to speak in Nielsen Norman Group's tone of voice, and always cite sources. And it couldn't do it. It did provide some information that was factually incorrect and not specifically scoped to our website, which makes sense. I mean, it is still ChatGPT. It still has all of the knowledge on the internet that it's been trained on, and it probably can't separate that out just for a custom GPT.

But what I'm seeing a lot of right now and certainly talking to a lot of vendors about this is these companies that have new offerings that like, you know, this is a custom bot. You tell us what to feed it. We'll train it on your data and we'll use it to create a high quality chatbot. I think earlier this year, that was something I was...

interested in, but a lot of the vendors that I saw, they just weren't doing a great job. So I think the technology has been improving since then, and I'm starting to see some really pretty good ones. So one that I kind of have my eye on right now is Astral, which is a company that specializes in creating these little custom bots. Like GPTs. Yes. Mm-hmm.

Interesting. I mean, I also played around a lot with GPTs. I created some good ones, but I'm still struggling with one that gives me feedback on designs. Not that I personally need it, but I would just love to have that. Sure. And what I realized is, I mean...

It's so difficult for a machine or for an AI to give me feedback on a design, but it's also very difficult for a human being to give me feedback on a design if I don't give enough background information. So I need to provide so much, you know, insights for each screen or each question that I have that I'm wondering if it's worth it. It's definitely a lot of work.

It's a lot of work and the answers were very generic, I would say, from like basic usability issues like, you know, make the fonts bigger, basically.

Of course. That is the number one issue I've seen with a lot of these tools that say: we'll do an evaluation of your product, or our fake users will give you feedback about your product. All of the feedback that I'm seeing, and we've tried lots of different tools for this, sometimes it's just so straight-up factually inaccurate. Baymard recently did a comparison study where they looked at what kind of design feedback an AI tool, I think it was specifically ChatGPT that they tested, gave versus humans. And some of the recommendations were flat-out wrong. But many of the others, in their study and also in our experience, tend to be things like: make the navigation easier to use.

Oh, thank you. Oh, I didn't realize that was important. Thank you for telling me. Thanks for nothing. No specific things. And also, like, I think I see a lot of these tools struggling to provide any rationale. And this is, again, where I'm saying, like...

That's something that probably an intern might do, like somebody without a lot of UX experience who isn't really sure what to look for and isn't sure how to give useful recommendations. Sometimes, you know, that's the kind of stuff they provide. So which is fine, but just for somebody, especially for somebody who's maybe more advanced, may not be all that all that useful. That's why I think it's good to view it as a supplement and not a replacement.

I mean, when you were playing around with it, did you find that some of the recommendations were useful? Did you get anything useful out of it? Yeah. Some things were definitely useful. I mean, useful is a thing, right? I mean, I guess it was not surprising, but I think it would be useful maybe for other people.

Well, so that's so funny, because I obviously spend a lot of time in user research sessions. And one thing that we hear from people so much, just because of human nature, is: I don't like this, but maybe somebody else would. Just to be nice. Exactly what you just said. Maybe. You know, I guess the main question I would ask is: did you feel like there was anything that you wouldn't have thought of on your own that it was able to give you?

No. Yeah, that's the problem right now. Now, I think really the key here is we need to have more of these tools that are specifically made for UX researchers and for supporting them. I think eventually we are going to get to the point where that won't be the case. You know, we're already seeing that ChatGPT can increase people's creative output and the range of ideas that they'll end up coming up with.

And so it's a great ideation partner because ideation is just spitballing ideas and then the human has to choose one. So that again is kind of like that, you know, instead of human augmented AI, it's AI augmented humans, I would say, is the way to think about it. That makes a lot of sense, right? Like really finding these areas where AI is helpful and then using it or using it as an assistant or as a partner.

but not in all parts of the process, right? So that's the best thing. So I feel when I'm looking around, especially at the design community at the moment, I'm seeing a lot of people who are really hyped about AI tools, who are trying out a lot of things, who are really curious, also cautious, and also some people who are a little bit more skeptical and a little bit scared of how to use it properly.

to not run into any privacy problems, or get into any ethical problems with companies, especially when it comes to research, right? So I'm curious to hear your thoughts about it.

Do you have any tips for people how to use it in a safe way to not harm companies, people, privacy issues? Yeah. So that, I mean, that is a huge question again, that like, I think everyone who's spending time thinking about AI right now is asking is,

How do we try to do this in a safe way, a way that's not detrimental, that's ethical? And I don't know that there are a ton of easy answers right now. Certainly in terms of privacy and data protection, it's important to know that if you're using something like ChatGPT, they say they're not training on that kind of data, but I would be very skeptical about that. I would be very cautious in terms of what information you share online

with these AI tools. And that can be a huge obstacle for research specifically. If you're working on a top secret, you know, new feature for this highly competitive product space, probably don't want to put that into the GPT because you don't know where it's going to go. And you certainly cannot put any

sensitive participant information into these tools. So you should never, ever be sharing your participant names or any kind of like address or email address information or phone numbers, nothing like that should be going into these tools. So you may find that you have to do a lot of work to sanitize the data before you share it with one of these AI tools. And it's really important to do that to protect people's privacy.
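A starting point for that sanitization step can be automated. The sketch below is illustrative only: the placeholder tokens and regex patterns are assumptions, and real PII scrubbing needs human review and far more robust detection (for example, a named-entity recognizer) before anything is shared with an external AI tool:

```python
import re

# Illustrative patterns only -- emails, phone formats, and names vary
# far more in real data than these regexes cover.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?\d{1,2}[ -]?)?(?:\(\d{3}\)|\d{3})[ -]?\d{3}[ -]?\d{4}\b"),
}

def sanitize(text: str, participant_names: list[str]) -> str:
    """Replace known participant names and common PII patterns
    with placeholders before sending notes to an external AI tool."""
    for name in participant_names:
        text = re.sub(re.escape(name), "[PARTICIPANT]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Maria Lopez (maria.lopez@example.com, 555-867-5309) said checkout felt slow."
print(sanitize(note, ["Maria Lopez"]))
# -> [PARTICIPANT] ([EMAIL], [PHONE]) said checkout felt slow.
```

Even with a script like this, a human pass over the sanitized output is still essential, since names inside emails, addresses, or indirect identifiers can slip through simple patterns.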

Yeah, that's, I think, a good reminder, and something I feel is not too difficult to do, right? Just remove all the sensitive information, and think about which things I'm allowed to share. There are always things that are not so secretive, right? A lot of it is just repetitive writing, I would say, at least from my experience. Right. Or it's filter design, and you're just trying to be consistent with every other filter design in the world. That's really fine. Yeah, I think the ethical question is another one, especially in terms of

intellectual property. And again, this is something we are super sensitive about at Nielsen Norman Group, because intellectual property is both what we sell and what we give away. We really don't have any products or physical things we sell. We basically sell the ideas that our really smart staff come up with. So we are pretty vulnerable to a world where suddenly, as a society, we're having to have a conversation about what an idea is and what it means to own an idea, when we know that ideas are usually a result of synthesis, of pulling in inspiration and inputs from multiple places.

You know, what does it mean when AI is doing that instead of humans? Obviously, with the screenwriters' guild strike, there have been some high-profile examples of these concerns. And there's been a lot of discussion of AI-generated art that in some cases is directly ripping off the hard work

that human beings have put into it. And not only are they not getting any kind of compensation, they're not even being credited. And in some sense, given current limitations, they can't be credited. It's a huge issue. It's definitely something

We're sensitive to, because we're experiencing it with our intellectual property. And we're also thinking about it elsewhere. For example, we recently got into a discussion internally about whether we should use AI art in our articles, because that's a big thing a lot of people are doing, especially blogs. We do have a wonderful design team, and one thing that they really specialize in is taking very difficult concepts

and translating them into very clear visualizations and using visual metaphors. These AI tools are not that sophisticated yet, so they're not really able to do that. But, you know, we're thinking, is it okay for us to use AI-generated art even as just an illustration? You know, something that's not necessarily critical to the concept.

And I'm kind of leaning towards no right now, just because it makes me nervous to think that we could be ripping off an artist's work and sharing it on our site. But I don't know; I don't think there's a clear answer right now. Yeah, I think that's a super important point: really being aware of where the content comes from, especially with artists who don't know that their work has been used to train these models. So I'm totally with you on that.

And what I feel is that there are also a lot of differences when it comes to these image generation tools. I mean, Midjourney is a great example. Their output is amazing. I mean, the quality is great. What you can do with this tool, I love it. The photorealistic examples you get really look real.

It's so good. But there are also a lot of biases. I mean, there are so many examples online, and it has been trained on a lot of artists' work without their consent. Comparing that to other tools, for example, Adobe Firefly, which I find really interesting. I mean, Adobe is a huge company, so of course they can't rip off artists and just use their work. But they are giving credits

for the work that has been used to train these models. And you can also input your own artwork and then get credits for it if it's used. So they came up with some kind of concept of contributing. Yeah, I also really love that. So I think that's

how all tools should do it, right? If you want to use a certain style from a certain artist, then you need to pay that artist, if they agree that it's okay. If not, then you can't do it. Yeah. But then your tool has to be able to attribute, you know, what sources it used to create this output. Systems like Perplexity have set it up that way, and it sounds like maybe Firefly as well. But for some of these other large language models, you know, I think

especially when you're talking again about ideas, that's really hard to trace and give credit to. That's a really interesting model. And I think that's kind of where we are on the internet right now. I mean, I think there's a really scary world in which content marketing no longer works, so companies are not motivated to provide high-quality, expensive content for free, because there's no business value in it.

And all that's being generated is AI-based articles. And so we already see this. If you search on Google, you're getting a lot of not-so-high-quality stuff rising to the top. And a lot of it is AI-generated. So there is a...

real possibility that the internet, which used to be full of lots of bad information but also lots of good information, ends up in this world where AI is just in a little feedback loop with itself, being used to create all this bad content that's just going back into the systems. Yeah. And that's not what anybody wants, I don't think. So in a world like that, the only way I see

content creators being motivated is creating kind of walled gardens where you have to pay to get access to the high quality stuff. And I really don't want that. I think access to information is critical for humanity and for its development.

And it's also an ethical issue. So, you know, I hope that it gives me hope to hear from teams like Perplexity and like you just described with Firefly, these organizations that are thinking about how can we do this in an ethical way, both because it's the right thing to do. And also, it's the only way to get people to keep creating good things on the Internet. Yeah.

I agree. But I also think that a lot of people will use the tool that is the most convenient for them. Totally. Best output. And they don't really care about artists. They don't care about other people. They only care about themselves. They're not afraid to publish horrible or maybe wrong articles. And I have actually seen that on LinkedIn: people who took NN Group's articles, kind of rewrote them somehow, and then pretended they came from other sources.

I mean, yeah, those things are pretty obvious to people who know the articles and know about these topics, but some other people don't. So they feel like, you know, they came up with all of these things and...

That's already wrong. So, yeah, you know, that's an issue that certainly predates AI for us. I have found many of my own articles copied verbatim and pasted into someone's Medium post, and they're claiming it as their own. Even things like, I'll refer in my article to my husband, if it's relevant to what I'm talking about, you know, so it's like...

No, they didn't even read it. They just slapped it into Medium. So that's been an issue for us. And even in that circumstance where they're using all of our visuals, this is a big problem for Sarah, because she creates a lot of really great visualizations of design concepts. People will just take that visual and post it as if it's their own. Sometimes they'll try to crop out the watermark, but sometimes they won't even do that. And even with that much more rudimentary version, there were

plenty of people on the internet who were reading it and saying, great job, thanks for designing this great visual. Meanwhile, Sarah's like, that was mine. And I'm sure we are not, by far, the only organization creating content that's dealing with that. So yeah, that already was a problem. And I can only see that getting worse with AI, where it's easier to mask the source.

And I feel AI would be so great for actually checking where content is coming from, right? Like scan the image, look through the whole internet, and see where and when it was published first.

Finding the website and then, you know, recommending: hey, you need to quote this as a source, or something like that. I'm already seeing that. Yeah, there are definitely companies, I don't know for a fact that it's Grammarly or Writerly, but some of these writing apps, maybe Hemingway, I would guess, might be doing something like this, where they're using AI as a plagiarism tool.
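Whatever those particular apps actually do, the core matching idea behind a plagiarism check can be sketched without any AI at all: compare a suspect passage against a set of known sources and flag the closest match. This toy example uses Python's standard-library difflib; the source texts are made up for illustration, and real checkers use far more robust techniques such as fingerprinting or embeddings.

```python
# Toy plagiarism check: find the known source most similar to a suspect passage.
from difflib import SequenceMatcher

def closest_source(suspect: str, sources: dict[str, str]) -> tuple[str, float]:
    """Return (source_name, similarity) for the best-matching source.

    Similarity is SequenceMatcher's ratio, between 0 and 1."""
    best_name, best_score = "", 0.0
    for name, text in sources.items():
        score = SequenceMatcher(None, suspect.lower(), text.lower()).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical corpus of known texts.
sources = {
    "nng-article": "Users scan web pages in an F-shaped pattern, reading the first lines fully.",
    "unrelated": "Quarterly revenue grew due to strong demand in the cloud segment.",
}

# A suspect passage that was copied word for word from the first source.
name, score = closest_source(
    "Users scan web pages in an F-shaped pattern, reading the first lines fully.",
    sources,
)
```

A score near 1.0 against a known source would be a strong signal that the passage was lifted; in practice a tool would also need paraphrase detection, which is where the AI comes in.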

It's just interesting because, you know, there are people trying to develop tools that can test a piece of content and tell you whether it was written by AI. But as AI continues to get smarter and more sophisticated, how are we going to do that? Yeah, it's a big challenge. You know, right now I feel like I can read something and tell, just because content is my wheelhouse and I spend tons of time reading, and I get a sense for different people's voices.

I can read two different pieces by the same author and say, I'm pretty sure they used ChatGPT for this one. But that may not be the case. Yeah. That may not be the case forever.

Yeah, I mean, most people, I think, can't really see the difference. And also most people don't read full articles. They're just going through some snippets, and they don't have the time for it. That's something we're totally aware of at NNG: people come to us, and other UX advice organizations, usually because they have a specific problem, a specific thing they're trying to learn about. I have to do a diary study, just tell me what I need to know and nothing more.

And so that's where I think we need to keep doing that to be competitive with AI because that's a big benefit of what AI can do. Yeah, I agree. It would be so cool. You know, imagine entering that in your website and then you get a personalized framework that you can use. Yeah. With a video. That's the dream, right? Yeah.

Let's see, coming in the future. Yeah. Let's see. I will keep my eyes open. But do you have any tips about how to integrate AI, especially for content creation? I mean, you mentioned a couple of things, but did you stumble across some ways that you find AI really helpful as an integration tool?

So a few things. So for content specifically, I love using ChatGPT to rewrite content. And usually I'll have it give me a couple of different options because, you know, the first one's not usually correct.

spot on. So for example, if I'm editing someone's work and I see, oh, this is a pretty long, convoluted sentence, I can drop that into ChatGPT and say, rewrite this in three different ways to make it shorter. Now, there are a lot of things you have to watch out for with that. One is that AI does tend to have kind of a distinctive tone of voice, and it kind of tends to push content

in that way of writing. So just make sure that the tone is still what you want when you take it back out of one of these AI tools. The other thing is that sometimes when tools like ChatGPT are editing for concision, they will remove words and create something that is grammatically correct but misses the point. It just rearranges some of the language in specific ways that

don't reflect the original point of the piece. So there's still some issues there.
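The workflow described here, asking for several shorter rewrites and then sanity-checking them, can be sketched in a few lines. The prompt wording and the sample model reply below are assumptions for illustration; the actual model call is left out so the sketch stays self-contained, and a human still reviews every candidate for shifts in tone or meaning.

```python
# Sketch: build a rewrite prompt, then filter a model's candidates so that
# only rewrites that are actually shorter than the original survive.

def build_rewrite_prompt(sentence: str, n: int = 3) -> str:
    """Prompt asking a chat model for n shorter rewrites, one per line."""
    return (
        f"Rewrite the following sentence in {n} different ways to make it "
        f"shorter, keeping the original meaning and tone of voice. "
        f"Return one rewrite per line.\n\n{sentence}"
    )

def keep_shorter(original: str, reply: str) -> list[str]:
    """Parse the model's reply and drop any 'rewrite' that is not shorter.

    A cheap guard against the model padding its answers; it does NOT
    check that the meaning survived. That still needs a human eye."""
    candidates = [line.strip() for line in reply.splitlines() if line.strip()]
    return [c for c in candidates if len(c) < len(original)]

original = (
    "The report, which was delivered late, was nonetheless received "
    "quite warmly by the team."
)
# A made-up reply standing in for what a chat model might return.
reply = (
    "The late report was still well received.\n"
    "The team welcomed the report despite its lateness.\n"
    "The report, which was delivered late, was nonetheless received "
    "quite warmly by the team, everyone agreed."
)
options = keep_shorter(original, reply)
```

In practice you would send the output of build_rewrite_prompt to a chat model and pass its text reply to keep_shorter; the third candidate above gets dropped because it is longer than the original, which mirrors the padding problem mentioned in the conversation.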

This is actually an area where we're looking to provide more guidance very soon. This year, we've had a big focus on studying how people are responding to these AI systems, how users are developing mental models around them, and the new behaviors, like apple-picking, that we're discovering people are using as they interact with these novel systems.

And that's really been fascinating. We've also been providing a lot of advice for people who are working on those systems. But something I really want to focus on more, especially in 2024, is

providing very specific advice for UX professionals on how to use these AI tools specifically for UX. So that's something we're actively working on right now: planning studies, conducting expert reviews, and things like that. So if anybody is interested in those things, definitely sign up for our newsletter, because those will be coming out soon.

Cool. I will add the newsletter link in the description box, so everyone who wants to sign up can do it by clicking on the link there. Cool. Thanks. Perfect. So I feel we covered most of the things that I wanted to ask you. Are there any resources that you think are great to...

check out, anything that you would like to recommend to the listeners, anything that you think is helpful to have a look at? So definitely, as I said, sign up for the newsletter for our fresh new AI and UX content that's going to be coming out shortly.

As well, check out some of our existing articles and resources. We've been, as I said, focusing a lot on studying these new evolving user behaviors. So that is going to be particularly relevant to anybody who is working on projects involving AI tools or AI systems or features. Also, definitely play with some of the tools that I mentioned today, like Perplexity, for example. I think lots of people are...

I think ChatGPT is kind of like the gateway AI. Lots of people get started with it, and there are some things that, in my experience, it's still the best for. But there are so many new AI tools emerging every day that it's worth playing around with some of those as well. I'm kind of on a quest for the perfect AI UX research tool, and I'm looking around and playing around with a lot of them. Still haven't found anything

perfect, in my opinion, but I do post a lot on LinkedIn about the tools that I find related to UX research, so people can connect with me there. That was actually my next question: where can people find you? So, on LinkedIn, obviously. I will also link it in the description box. Anything else, Instagram? No. Twitter? Also not. LinkedIn is really the best place to find me.

Perfect. I think everyone here is on LinkedIn, so they should all follow you. You're always sharing great content, a lot of interesting articles around that topic. Also about a few of the topics that we mentioned today, where you're really going

into depth, sharing examples of how you are building your prompts and what you're learning. Very interesting, great content, super helpful. Yeah, and one of my favorite things to do on LinkedIn is have conversations with people and just hear other people's experiences and the tips they want to share. So, always happy to chat on LinkedIn.

I love that. And I think this is an amazing approach, really connecting to each other, talking to each other, especially in times like these, right, where we need to learn from each other and see what works, what doesn't work and helping each other, basically. Super cool. I love that.

Okay, Kate, thank you so much for taking the time, for sharing all your knowledge with us. I think it was super valuable, very helpful. So thank you so much for everything. Thank you. I had a great time.