
44. AI & UX Research (feat. Savina Hawkins & Caleb Sponheim)

2024/11/26

NN/g UX Podcast

People
Caleb Sponheim
Savina Hawkins
Topics
@Savina Hawkins argues that the breakthrough of the transformer architecture has produced remarkable progress in natural language processing, letting AI understand and analyze human language far better and opening new opportunities for UX research. However, AI still lacks the layered intelligence humans bring to problem solving, such as business, humanistic, and social-impact considerations, so UX professionals should treat AI tools as "junior partners" whose output must be reviewed and quality-checked to guard against fabricated or inaccurate content.

@Caleb Sponheim argues that the positive impact of AI applications cannot be judged in isolation; their costs and potential downsides must be weighed as well. Large language models are easy to start using, but using them efficiently requires the skill of writing effective prompts, which takes time and practice. Company layoffs may not be driven by AI itself but by a combination of factors, with AI merely serving as the stated justification. AI can generate research data, but that data is drawn from existing human knowledge rather than from research on a specific product's users, so it cannot replace human user research; the core of user research is exploring the unknown, which AI currently cannot do. AI is heavily disrupting some design disciplines, such as graphic design, but its impact on UX design is smaller, because UX design depends on cross-functional collaboration, translating data into designs, and iterating on feedback, which AI cannot yet replace. Facing the impact of AI, UX professionals need to keep demonstrating their value, stay focused on business needs, and integrate AI tools effectively into their workflows.

Savina Hawkins highlights AI's applications in UX research, especially the Altis system's ability to analyze conversational data. Altis can distinguish facts from inferences and offer multiple interpretations to support more accurate and comprehensive analysis. It can speed up design iteration; for example, in a design sprint, interviews can be completed in one day and the analysis delivered the next. Altis can also help UX researchers surface new insights and compensate for the limits of human memory and attention.

Caleb Sponheim advises UX professionals to try applying AI to everyday work such as writing or data analysis to learn the real capabilities and limitations of these tools. He suggests following both optimistic and skeptical perspectives on AI and subscribing to a few newsletters or weekly digests to keep up with the industry. He sees collaborating with AI as a new skill: learning how to guide AI effectively, which is similar to the interviewing techniques UX professionals already use.


Chapters
The chapter discusses the evolution of AI, particularly the transformer architecture, and its impact on UX research and analysis, highlighting the advancements in natural language processing and visual understanding.
  • Transformer architecture has significantly improved natural language processing.
  • AI can now analyze and understand human interaction and UI design more effectively.
  • Google's AI model can understand UI design elements, a breakthrough for UX professionals.

Transcript

This is the Nielsen Norman Group UX Podcast.

I'm your host, and we're back with another episode about AI and UX. AI hype has been rampant, but there are still so many open questions about it, like: How will AI impact our work? What can UX professionals do to avoid being replaced by bots? Is that something bots are even currently capable of? Today we feature excerpts from two conversations I've had about the latest developments in AI tools for UX research and analysis. We dissect the hype from the legitimate risks, costs, and benefits of AI. After all, AI dates all the way back to the fifties, so why are we suddenly hearing so much more about it in recent years? To answer that, we'll first hear from Savina Hawkins, a senior UX researcher at Meta and the co-founder of an AI platform known as Altis, which we'll talk about soon. She shares a bit about what developments have made the AI of today different from the AI of the past.

Before now, AI has very much been confined to narrow use cases, so it was called narrow AI, and it would be good at really, really niche things: image recognition, labeling things, facial detection, very small, constrained tasks, but not a lot of broad understanding of language or reasoning. When there was reasoning, it was way back when we had expert systems.

These were mostly logic-based, rule-driven systems that didn't adapt well to new contexts. Think of something like a self-driving car trained on tons of data to drive in the States but then brought to the UK; it doesn't really know how to handle adapting to driving on the other side of the road. That's a good example of what AI used to be. More recently, we had this incredible breakthrough in model architecture called the transformer architecture, which I highly recommend getting nerdy about; it's such an interesting innovation.

It unlocked a lot, specifically in natural language processing, which before now was pretty limited. Earlier models might grasp the broad strokes but not the details and nuances of human communication, especially with things like two people talking, where there are so many layers of meaning. They would never have been able to understand how that process works the way a human would. With these new technologies, we're really seeing the amount of understanding the AI is capable of simulating, so to speak, because, you know, what really is understanding, right? But the amount that it's able to analyze, distill, and understand in human interaction and human use of language is way more than it has ever been at any point in the history of AI.
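
As an aside for readers who want to peek under the hood: the transformer's core mechanism is attention, where every token weighs every other token when building its representation, which is part of what lets these models track layered meaning across a conversation. Here is a minimal NumPy sketch of scaled dot-product attention, offered purely as our illustration, not something discussed in the episode:

```python
# Minimal sketch of scaled dot-product attention, the core of the transformer
# architecture mentioned above. Illustrative only; real models add learned
# projections, multiple heads, and many stacked layers.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a mix of the rows of V, weighted by how well Q matches K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # blend the values

# Toy example: 3 tokens with 4-dimensional embeddings (random stand-ins).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(tokens, tokens, tokens))
```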

This is of course really exciting, because it means AI can now write creatively, generate content really fast, and analyze things quickly, which is stuff it really couldn't do before; people who were trying to do it were being called crazy maybe up until a year or two ago.

So it's really interesting to see those breakthroughs. In particular, they hit really close to home for our work, because so much of what we do is talking to people and understanding people's needs, and so much of humanity, really, is this discourse, which is grounded in language and nonverbal communication as well. The transformer architecture also has ramifications for other types of models, like vision-based models: the encoder and decoder, how the inside of the model is structured, and how that works.

There are some really cool projects being done. For example, Google came out with a model that is specifically engineered to understand UI design. It can take in a screen and basically understand where the buttons are and what's being laid out, information-wise, the way a human would understand it if they were looking at that interface. That was not at all possible until the last year. So you can see how these specific innovations around natural language processing, multimodal AI, and screen understanding are so applicable specifically to our field and can very much revolutionize the way we do our work.

Oh, totally. And it's so fascinating to think that even within the last year there has been this pretty incredible progress in models' capabilities in visual processing and language processing. I think that's amazing when you consider how UX professionals like us can ultimately get work done.

Now, on the flip side, we know a lot about what's possible. What would you say is a particular thing people should be a bit cautious about doing? Or is there a particular capability that isn't quite there yet and maybe needs a little more time or some other input to make it ultimately better for our uses, like in the case of UX research or even UX design?

Yeah, I would say that when humans are solving problems, we use so many layers of our own intelligence to make sure we're doing something correctly. For example, when you're working on a project, you'll approach it partially through a business lens, partially through a design or humanistic lens, and then maybe also through social impact and broader concerns about how technology affects the world we all live in and share together. AI doesn't necessarily have all of those layers just yet.

So it's very important to view the tools you're working with as thought partners, but more as junior thought partners that report to you. You really need to be in charge of making sure the quality is there, fact-checking, and making sure the output actually makes sense, not just accepting what it says at a really high, surface level. One example is hallucinations; everyone's heard of them at this point. Actually, I'm not a fan of that language, just because it implies that something is made up.

What's actually happening underneath the surface is that the AI is generating what is probably a totally likely set of words to string together, given what it has seen in its training data. But AI doesn't have a sense of what a fact is; that's not part of the architecture. It's stringing words together in a way that is probable.

And what it has seen in internet data is all sorts of stuff: misinformation, wild opinions, ungrounded claims, creative writing. All of these are different ways to use language. So it doesn't always have the ability to differentiate between what's fact-based and logically reasoned versus something where the words simply work together in a way that makes sense given the training data patterns, but that doesn't necessarily correspond to reality.
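
To make "stringing words together in a way that is probable" concrete, here is a toy sketch (our illustration, vastly simpler than a real language model): generation is just repeated sampling of a likely next word, and nothing in the loop ever checks whether the resulting sentence is true.

```python
# Toy next-word sampler illustrating why "hallucinations" are just probable text.
# The bigram table below is a made-up stand-in for a trained language model.
import random

next_word_probs = {
    "the":   {"study": 0.5, "users": 0.5},
    "study": {"found": 0.6, "cited": 0.4},
    "cited": {"a": 1.0},
    "a":     {"whitepaper": 1.0},   # fluent continuation; the paper may not exist
    "found": {"that": 1.0},
    "users": {"reported": 1.0},
}

def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        dist = next_word_probs.get(word)
        if not dist:
            break  # no learned continuation; stop
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the study cited a whitepaper": plausible, never fact-checked
```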

Clearly, large language models have improved significantly, which has unlocked many new capabilities, but there are still limitations, including those hallucinations that were just mentioned. But exactly how detrimental are some of these limitations? And what else do companies need to consider before running full speed at these AI solutions? To tackle that, we hear from NN/g user experience specialist and former computational neuroscientist Caleb Sponheim.

It is easy to imagine the positive impact of an AI application kind of in isolation, or in a vacuum. Like, it would be great if I didn't have to write all my emails, or it would be great if I didn't have to write out my entire research plan by hand. Something like that, in a vacuum? Great.

Oh, amazing, right? But no business decision is made in a vacuum, whether on an individual level or on an organization-wide level.

And so when we decide, hey, we're going to use AI to accelerate the scoping process for our consulting business, or something like that, that decision isn't made in a vacuum either. Then it's like, okay, what's the cost associated with doing that? Large language models, in particular, take skill to use well and use effectively.

There's a low barrier to entry, in that you don't need to know a programming language beyond English to start interacting with them. But in order to approach the ceiling of performance from these models, you need to be able to write effective instructions, or prompts, for them. And so there's a similar time trade-off to automating something as a programmer would face, right?

The time it takes to automate something will be longer than if you just do it yourself the first time, or once, or even twice, or even three times, right? So you have to ask: how long will it take me to automate this, and then maybe maintain it, fix the bugs, implement it, and see the results, versus can I just do it myself, or can I ask someone else to do it?

Time and time again, we have to evaluate and weigh this balance between being innovative in our practice and being a bit more cautious and mindful of hype versus reality. I'm sure many of you have seen the memes of people sprinkling AI on everything, adding an AI chat to everything. And while there are certainly going to be really great use cases, and we already see amazing use cases for AI and productivity, how much of this do you think is hype and how much of this do you think is legitimate?

An application that jumps to mind for lots of people, in terms of a consumer-level benefit of AI, is search: things like what, in the UX world, we call desk research, or secondary research, right? Say I want to learn more about contextual inquiry. Some people might encourage me to use a large language model to learn more about that method.

So I might go to, let's say, Google Gemini and say, hey, can you provide me with an explanation of what contextual inquiry is, and can you give me some sources or links for further reading? I did this. I asked Gemini for this exact piece of information. And what I got were five perfectly formatted citations for white papers written by various people at various organizations, NN/g included, on contextual inquiry. Great. None of them existed. They were completely hallucinated.

They were comprehensible, in that they looked like text strung together that I could read. But they were by authors who had never written for NN/g. They were titles of white papers that didn't exist.

As soon as I started to look for the source of the actual information, not just take the tool at its word, things started to fall apart. Any time you ask a tool to fill out a research plan, or draft an email, or tell me about contextual inquiry or some other UX research method, it is incumbent upon me to check that all of the information it's giving me is correct. I have to budget that into my use of the tool, and that's an associated cost.

Clearly, AI is not perfect. But nonetheless, the hype can often make it hard to tell whether the end is truly near for the role of the human UX professional, especially when layoffs seem to have become increasingly common in our space. Will AI soon steal our jobs? Is time ticking until we become redundant? And just how vulnerable are we as UX professionals?

Stakeholders don't need a legitimate reason to fire you; they just need a reason. AI doesn't need to work in order for them to use it as a justification to get rid of you. That's the scary thing.

There are too many news stories to count about companies cutting headcount. Maybe two years ago it was a complaint about interest rates. A year ago it was macroeconomic conditions generally. These days it might be a claim about artificial intelligence.

They're cutting headcount for all of these stated reasons, with the expectation that the workers who are left will be able to do the same amount of work, or more, than everyone who was there before, right? If we look at that claim about AI specifically, and let's focus on the UX community, it's unfounded.

We talk in the course about both design and research. In regards to research, you can ask an AI to generate research data. You can do it; it's a bad idea, and you shouldn't do it.

It's essentially researching all of human knowledge, not the people who use your product, and you can imagine those are very different things, right? Additionally, and this is more of a philosophical take, but I think it's true: large language models are built and trained upon human knowledge, comprehensible text that has been created up until this point in time.

Research is about exploring the world, the known universe: finding the edge of that known universe, staring into the unknown, the abyss ahead of us, and taking a tiny little step into the void. That act is extremely difficult, resource-intensive, and unique, and I'm not convinced that AI could ever do it.

It can't do it now, and there's no indication that it will be able to. So that's research. In regards to design, I want to be clear: many design disciplines are being disrupted by AI right now. Many, many, many are. There are AI tools that are accelerating and replacing steps in the workflows of graphic designers, of people who produce visual content, of stock photographers. Those disciplines are being disrupted.

UX design is remarkably unique in that, based on our research, the work that designers do, and that UX designers specifically do, is still safe from AI; it has not made inroads in the way that it has with other visual design disciplines. Again, that's not to say it won't, but it has not. And that speaks to some of the very unique things UX designers do, not least of which is the work they do to communicate cross-functionally around a design. The most valuable work a UX designer does is not making the pretty thing, but transforming data into a visual representation, garnering feedback, iterating, and then implementing it. It's not just about how good the thing looks; it's about everything around it.

What really is the job of someone working in UX? Are we just our workload? Are we just rote data processors? Or is there something more than that that we're doing? I'm fully in the camp that there's a lot more we're doing than just qualitative data analysis or using Figma.

There's a lot happening that isn't just using the tools or running these workshops. So I think that, if anything, these are improved technologies that allow us to do more work faster. And as we all know, there's an abundance of opportunities where UX research and UX thinking are needed throughout the company, yet we're often locked up inside of product, because product has the most desperate need for our type of thinking.

But if you collaborate with sales or customer support or marketing, there are so many different surfaces of a company that touch the customer but often don't get the same level of thought and analysis that we bring when we work on actual product work. And that makes sense: if you have limited resources for UX and have to deploy them somewhere, product is absolutely the place. But with the ability to amplify how much work we're doing and how quickly we're doing it, I see a future in which we're embedded throughout the entire organization. Rather than being linked directly only with product and design, we become the institutional knowledge keepers, so to speak, of who the customer is, helping design customer-centricity into every aspect of the organizations we work in. That's so much more than just product; it's also marketing, sales, and support, because there are so many of those interfaces throughout the whole company.

So if anyone is listening and is worried about the future of their career: things are going to change, for sure; I think no one would guess otherwise at this juncture. But I think there is very much a great future ahead for us, one in which we're able to do more and different types of research work and help more people and organizations.

The good news is humans are not only relevant but necessary. Still, it's not enough to just keep doing what we've always done, because the competitive landscape is changing very quickly. I asked both Caleb and Savina: what kinds of skills should UX professionals be building in order to remain relevant in the age of AI?

You actually need to think about how you're collaborating with the AI as a separate skill. There's a name for it: prompting, or prompt design. Prompt engineering is another term it goes by. All of this is the art of directing the AI to get it to do what you want it to do: instructing it to approach something from a specific angle, giving it feedback, and working back and forth with it. That is totally its own skill and art form, and I feel UX professionals are really, really primed to do well at it, because the art of the interview is all about how you direct someone else, how you lead them into new territories and make sure they're going in this direction and not that direction, or intentionally stay open-ended so that you don't inadvertently guide them somewhere. That trade-off between artfully using guided questions and keeping things open-ended to prevent bias is completely applicable to how you work with AI, because even the little details of the words you use to prompt the AI all carry weight in how the model works in the background.

So the words you use to ask the question, statistically speaking, bias the AI. And it can be hard for folks outside of our field, I think, to really understand why that's bad, because the effects of acting on incorrect information show up so long after the decision has already been made. It's so far downstream that people kind of forget about it. There isn't a clean connection between "okay, we made this decision because of this information, which turned out to be wrong."

It can be hard to make those connections. And I think that's one of the things our field is really poised to help with: making sure that good decisions are being made, and that past information is brought back into the current context carefully. Maybe those were interviews conducted before COVID; the post-COVID world is totally different now.

So a lot of the things people said in those interviews may not actually apply to this particular new context, or we're looking at something new entirely. Those sorts of skills, being able to contextualize data and make sure it's actually used and leveraged in a way that supports humans' real needs while staying faithful to the truth of the situation: there's such an art to that. And I think we're just going to become more relevant in the future by being able to manage that process.
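
As a small, hypothetical illustration of that point (our example, not one used in the episode), compare a leading prompt with a more open-ended one for the same excerpt; the wording alone steers what the model will report, much like a leading interview question:

```python
# Hypothetical prompts showing how wording can bias an LLM's analysis.
transcript = (
    "P3: I usually give up before checkout because I can't find the shipping cost."
)

# Leading wording: presupposes the conclusion and nudges the model to confirm it.
leading_prompt = (
    "Explain why users hate our checkout flow, based on this interview excerpt:\n"
    + transcript
)

# Open-ended wording: asks what the data supports without presupposing an answer.
open_prompt = (
    "What observations does this interview excerpt support? Quote the participant "
    "where possible and flag anything that is your interpretation rather than a quote:\n"
    + transcript
)

# Sent to the same model, these typically produce very different framings.
print(leading_prompt, open_prompt, sep="\n\n")
```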

The way that we react to AI is not so different from any other tool or disruption. I want to emphasize as much as I can that the instincts we have about how to demonstrate value and contribute to organizations, how to clearly show impact and contribute to impact, are the same in the world of AI too, right? It's a new set of tools that can be applied in particular ways.

But if you weren't able to evaluate what matters in the business and shape your behavior and the type of work you do to align with that before AI, then your job was maybe as tenuous before AI as it is now, right? I'm not saying the risk hasn't increased, but we have a bank of knowledge about how to do good work and how to communicate the impact of that work. It's difficult in UX, but we have a lot of knowledge about how to do it. You don't have to throw that knowledge out the window when it comes to AI. It's largely the same context, and I mean that in some ways as a vote of confidence for us: you don't have to reinvent the wheel. The wheel is still on the cart; maybe there's just a different horse out front or something, or you're holding different reins. I don't know, something like that.

Yeah, yeah. But it...

...now has all-weather tires.

There we go.

Whether we like it or not, AI is certainly here to stay, and it can be a useful tool to increase our productivity and keep ourselves from getting left behind. In fact, that core need to improve research quality and speed is exactly why Savina started her company, Altis.

You've been doing a lot of really cool things with Altis. Can you tell us a little bit about that and what prompted all this exploration for you?

Yes. So, necessity is the mother of all invention, and sometimes when you prototype your own solutions you end up with something really interesting. Altis is one of those. Way back when I was at Eventbrite, there was this need for Eventbrite customers to figure out how to survive COVID, because Eventbrite is all about in-person events.

When COVID hit, that completely collapsed a lot of small businesses and how they were able to stay afloat. I was really passionate about helping these small businesses survive, so I needed to figure out a way to scale the work I was doing, interview more people, and turn around insights faster.

That led me down this path of prototyping, which ended up leading to the question of whether or not AI could actually enable people to automate whole chunks of our work: for example, the qualitative analysis process after doing the interviews, short-circuiting that process we normally do, which is very, very manual and detail-oriented, and seeing if you could offload it entirely to the AI. And the answer ends up being yes, the AI actually can do stuff that we would normally think only humans are able to do.

The technology these days is really fantastic. Obviously, with our system, we wanted it to be a serious research tool, and to be a serious research tool it has to be willing to tell you things that maybe you're totally in the opposite camp about, but the facts say otherwise.

So it needs to be able to substantiate its perspective and also articulate why a particular perspective might make more sense than something else. All of this has been built into the architecture of our system. As one example of how we've brought this to life: normal AI systems don't really know the difference between a verbatim quote and an inference.

Everything an LLM does is, in effect, inference; it's an inference engine, so it doesn't really know the difference between a verbatim piece of fact and something that's inferred, because everything it does is effectively inference. So one thing we built in is that when it's reasoning over pieces of data, it actually knows the difference between a verbatim quote and something that could be inferred from what someone said. This is really important, because when humans interpret discourse, we naturally know how to make that distinction.

We know the difference between "someone said this explicitly" and "they said this, therefore it can be inferred that X, Y, Z." But that is not the kind of logical thinking going on in the background of large language models. They're really just connecting strings of probabilistically relevant content; they're producing content rather than producing knowledge or information.

Our system is very much designed to work within that: we're trying to produce knowledge, to produce information that's actually useful and relevant. And of course, with anything qualitative, there are always many possible interpretations, and we've built that into the system as well. It will explore several different interpretations and then either decide which one is the most plausible, or present several to you and show you the different options for how you could interpret something, or show you the most coherent interpretation available. These ways of thinking are not at all out of the box for AI, but they are very much intrinsic to how we're approaching a lot of these problems.

By the time folks are listening to this, we will have opened up new spots for design partners. So anyone can go to the Altis website, sign up, learn how to use it, get a demo, and start using it immediately. As for easy use cases to think about: right now it's very much optimized for analyzing conversations, but any kind of conversational data works.

So it's very broadly usable. It's also really designed only for folks who already have a background in research and know what they're doing with the research workflow; it's almost like getting your own specialized research assistant so that you can do your work faster and more effortlessly than before. For me, one of the things I love about using it in actual research work is that you can delegate the rigorous coding process for all of the interviews, and then sort the information the way you want to and do the storytelling at the end.
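
As an illustration only (this is a hypothetical sketch of the general pattern, not how Altis is implemented), here is one way a researcher might delegate a first pass of coding to a general-purpose LLM while keeping verbatim quotes separate from inferences; the model name, JSON schema, and prompt are all our assumptions, and the sketch assumes an OpenAI API key is configured:

```python
# Hypothetical sketch: first-pass qualitative coding with an LLM, keeping
# verbatim evidence separate from inferred interpretation. Not Altis's code.
import json
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

excerpt = (
    "P7: I tried to cancel my booking twice. The button was there, "
    "but nothing happened, so I just emailed support instead."
)

instructions = (
    "You assist a UX researcher. Code the interview excerpt into themes and "
    "return JSON with a 'codes' list; each item has 'theme', 'evidence', and "
    "'evidence_type' ('verbatim_quote' or 'inference'). Only use 'verbatim_quote' "
    "if the evidence appears word-for-word in the excerpt."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": excerpt},
    ],
)

codes = json.loads(response.choices[0].message.content)
print(json.dumps(codes, indent=2))  # the researcher still reviews every code
```

Even with structured output like this, the earlier point stands: the result is a junior partner's draft, and a human still checks every code against the transcript.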

The other thing I've liked about it, and a lot of the customers who've been helping us prototype it and using it in their actual jobs have found this too, is that the AI will often teach you at least one or two things that you did not see when you were looking at the data. That's just by virtue of the fact that when two humans look at the same data set, they're always going to take slightly different approaches.

No one ever sees exactly the same thing, and the AI is sort of many people in one, in the sense that it's always approaching things with a fresh perspective. It doesn't have a specific memory the way humans do, where you can't change the fact that you were the same person who read the other interviews. The fact that you read the first two interviews has now biased how you approach the coding process for the next couple of interviews you read. The AI doesn't have that; every time it looks at the data, the way the system is designed, it's starting from scratch.

Because of that, it lacks biases that we have. Of course, it also has its own unfortunate biases, so no, the human can't be removed from the process, because the AI is going to be biased in negative ways as well. But its beginner's mind, the way it approaches everything, leads to new and novel connections that will be different from the ones you naturally came to. Then you get to organize those things together: what are the unique insights that you, the human doing this project, came up with, and how do they integrate with the new perspectives you can get from working with AI?

It just unlocks a lot. Obviously, as part of designing the system, there are always opportunities to test it while doing research for other projects. Some of the things it has unlocked for me are day-over-day iteration cycles with design, where during a design sprint you run interviews on one day, and then before the next morning Altis analyzes all the findings overnight and delivers the results, so the team can review them going into the next day of the sprint.

That kind of rapid iteration cycle is just going to 10x how much we can do in these design sprints and let us work with teammates more quickly and in a more agile way. I find all of that so exciting. One of the most humbling things about it, too, is that very frequently its output is better than what I personally was going to write, which is humbling but also really exciting, because it shows that it is so much better when it's working with you, and you are so much better when you're leveraging what it's good at. Human memory has such intense limitations.

There's no person who never gets tired or never forgets things; these are just parts of being human, part of the design of how we work as beings. The AI, on the other hand, will sit there and analyze crazy amounts of data, and the last data point it reads gets the same level of attention as the first.

It never gets tired. It's able to make connections in a way that's almost the exact opposite of how we function. So human intuition, connecting the dots, all of that works really well alongside its ability to bring the pieces together and analyze at scale. I think we're definitely complementary systems and work better together more than anything else.

That was Savina Hawkins, co-founder of Altis. Companies like Altis are leading the way in changing the landscape of UX work. And regardless of whether you're doing research or design, there is a ton of opportunity to use these tools wisely and effectively. But if you're unsure where to start, Caleb has a few suggestions on how to stay informed about what's new in AI.

What I would recommend is engaging with the core large language model experience in small ways. Fortunately, because many leading AI companies are either running on vast venture-capital-backed budgets or are run by monopolistic tech companies, the most advanced large language model products are available for free. And so what I would recommend is this:

When you're doing a task that feels as though it maybe could be made a little bit easier, and especially when it's a writing task, that's a very easy thing to try. If it's any sort of quantitative analysis, or even qualitative analysis, I would argue that it's worth the extra time and effort to just see what an AI tool does with it.

The reason you're doing this is probably not to get extra efficiency out of it, but to check in and see how these tools are doing, to check in and see, oh, could I integrate this into my work, or is it not ready yet? Right? You can easily subscribe to too many daily email newsletters and be inundated and overwhelmed.

That's my job; that's not your job. So what I would recommend is subscribing to three sources of weekly information. One of them would be just the newsletter equivalent of a newspaper, right? There are a bunch out there.

Any sort of weekly update on the state of the AI industry, just to keep up with what's happening. Nothing happens day to day that is so important that you need to know about it immediately; weekly is fine.

You can even go monthly if you really want to, and you're not going to miss much, especially when it comes to actual applications to our work. And I say that as reassurance; I fully believe it. Print it, record it, whatever. I do not think the AI industry moves so fast that you need to be on top of it daily or even weekly. Additionally, I recommend subscribing to an optimistic perspective on AI and a skeptical perspective on AI.

There are a lot of writers out there, and what optimistic and skeptical mean to you will be different, but there are plenty of writers who are messing with AI daily, trying different things, testing out new things, and really pushing to see what the ceiling is on what we can do, what we can build, and how we can make our work better. And then there's a just-as-important, if not more important, group of writers who are delivering informed critiques of the technology, of the industry, of the communication around it, and of the risks associated with AI. I maintain that it is important to be exposed to both of those perspectives, at the very least. If you have those three, the weekly news plus an optimistic and a skeptical perspective, you're good, you're golden.

That was Caleb Sponheim, UX Specialist at Nielsen Norman Group. We've included Caleb's favorite sources of AI information in the show notes. But if you want to do an even deeper dive into how to use AI effectively for UX work, Caleb is currently teaching a full-day course with Nielsen Norman Group called Practical AI for UX Professionals. More information about that course can be found on our website, along with many free articles and videos on UX and AI.

To close, if I were to summarize the chats I had with Caleb and Savina into a single takeaway, it would probably be this: AI is still evolving rapidly and carries an immense amount of risk. Anyone working in this space won't hesitate to tell you that there are a lot of issues still left to resolve.

However, using these tools as thought partners, with a healthy dose of skepticism for the outputs, is a strategy that will help you manage risk and create better solutions for your teams. Thank you again to Caleb Sponheim and Savina Hawkins for their thoughts and for their willingness to share with our community. This episode was hosted by me and produced by the incredibly talented Chris Richardson.

For even more free UX content, including articles and videos, or for information about our upcoming courses, check out www.nngroup.com.

And if you like this show, please leave a rating and subscribe or follow on your platform of choice. That's it for today. Until next time, remember: keep it simple.