Hello and welcome to another episode of our podcast, The Future of UX, where we dive into the fascinating world of user experience and product design. Today, I am super excited to host Emily Campbell. She's a true leader in the field of design and AI, and she has been pioneering a project called Shape of AI, exploring the intersection of AI and design.
And I personally must admit, I am so happy to have Emily with us. She's an incredibly intelligent human being and I absolutely love her contributions on LinkedIn and other social media platforms. I think she's super inspiring.
And I think what really sets Emily apart is her openness about her own learning journey. Rather than claiming to know everything, she shares her insights and discoveries, which is simply brilliant. So I'm a huge fan, as you can hear, and I found our conversation super inspiring. And honestly...
Yeah, the conversation could have gone on forever, I feel. So stay tuned. You won't want to miss this amazing episode with Emily Campbell. Enjoy. Hi, Emily. Welcome to The Future of UX. Hi, thanks so much for having me.
Of course, I'm happy that you're taking the time. Super excited to talk to you about AI. Before we dive into the topic, please give us a quick introduction and tell us a little about yourself and who you are.
Sure, absolutely. So I am about a 15-year veteran of tech. I've been leading design teams for some time now, and I'm currently serving as a fractional design leader where I'm helping design teams grow in maturity, start to figure out how we can approach our craft in this world of AI, and of course, helping companies overall raise the impact of design within their companies. Mm-hmm.
Awesome. And you're also focusing a lot on AI, right? You're sharing a lot of articles around that topic. You have a lot of really interesting thoughts on these topics. And I think especially at the moment, it's very important to have design leaders sharing expertise and thoughts about these topics. So maybe you can tell the listeners a little bit about the things that you are doing in the AI space as well.
Sure, absolutely. So about a year and a half ago, I started working with a team that was focused on AI products. I was working closely with our head of data science, engineering leaders, engineers.
And starting to explore how this technology works and what use cases might make sense for our user base. This is while I was leading design and research at Bender. And that alone was such a fascinating opportunity to come to this technology with an open mind, a generative mindset and say, where is the value here and how does this work?
And then as I go into this year, I've continued that curiosity, and it started to raise additional questions for me along the lines of: what are the other use cases I'm not seeing? How do our practices as designers and people building these software tools need to evolve, both with this technology but also for this technology? And what are we already seeing? Because we're in this moment where things are changing very rapidly, and our critical discourse to understand it, let alone plan for it, is just lagging a little bit. And so that's been sort of the impetus behind my project Shape of AI, some of my writing, and of course the work that I've been doing with the teams I've been working with as well.
Awesome. Can you talk a little bit about the results that you got? You talked about all these questions, so what are some of the insights that came out of them? I know there are no final answers, of course, but what are some of the learnings you had along the way? Sure. So in terms of overall adoption and the overall space, there are a couple of things that I'm seeing. The first is that AI is already evolving.
And of course, when we talk about AI today, many of us are thinking about the technologies that have emerged out of transformer models like GPT and natural language processing. But AI has been with us for decades, and most people don't know they're using AI day to day. So that's one interesting finding: in the number of people who self-report that they use AI and interact with it, user perception is not aligned with reality. Additionally,
On the other side of the coin, perhaps, we're seeing that most implementations of generative AI tend to be, I'm just going to call it surface level, almost gimmicks. We're seeing a lot of: I want to show you that my company does AI, so I'm going to put it in there. I've used the comparison to the show Portlandia, where they're like, put a bird on it, it makes it cool. We've got this put-AI-on-it moment in the industry. Oh, yeah.
And so we have this world where, on the one hand, people are interacting with this technology in a way that's comfortable and useful. And yet the solutions that we're building aren't necessarily useful or clearly valuable. There's a gap there. And so I think as designers, what I'm identifying in the patterns we're using and the ways we're introducing it in our interfaces is
We are still incentivized to go for the gimmick and to just sort of put it out there. This is changing quickly, but I have not seen as many examples where an implementation of AI seems truly integrated into a user journey. I'm sure we'll go into some of the patterns I've been seeing and documenting, but I think that's the big takeaway: there is a clear opportunity here. People understand this technology can bring value to their lives when they're made aware of it, but the way we're introducing it in a more observable way is maybe not quite there yet. So I don't really know what to make of that, but it's a pretty common pattern I'm seeing, both in workplace products and on the home front. And why do you think it is like this? Gosh, any number of reasons. As I started to look at this, I've used this term, the Shape of AI,
as the name of my project of just kind of starting to put a name to the different patterns at the interface level that we're seeing. And I started with the interface because it's what is most visible to us. But those reflect patterns at the business level, at the incentive level, at the market level. So...
To answer your question of why I think it's this way, the first is that investors are valuing companies that introduce generative AI over companies that don't. There's absolutely a multiplier effect right now, and we're only just starting to see that bubble begin to burst. We're seeing earnings reports come in below what's expected. We're seeing some of the big companies...
start to walk back some of their revenue goals related to AI specifically. So I think we're coming out of this hype cycle where companies have wanted to stand out or at least meet the competition by showing AI, by making it very visible in the interface.
And when that's the incentive driving your decisions, you're thinking less about the people using the product, what they're actually trying to accomplish, and how this technology helps them accomplish it. Instead, you're thinking about AI from almost a marketing perspective. But I think as this bubble bursts, we're going to see that wave wash over, and we're going to see more and more of AI as a value add. And you can look at the numbers in the data: there was a Pew Research survey that came out last March, looking at how adoption patterns are changing among people who have used AI. We're already seeing large jumps, high single-digit to low double-digit gains, in people adopting generative AI into their workstreams, particularly at work.
There's data, too: Ethan Mollick recently shared two papers on Twitter showing that people who use AI in their writing, or for code-type tasks, anything that's this sort of repetitive task, see major gains in both the desirability of these functionalities and also their usability. So I think we're just starting to see the market slow down a little bit. And it's an interesting moment for us as designers, because we're coming to the point where companies are going to realize: if we're going to truly
break free of the competition, if we're going to truly make this something people want, we have to focus on the people, not the technology and not the catchphrases, and lean into that best practice, you know, game-changing, oh my goodness, human-centered design. I think we're going to see that become a priority and focus again. Awesome. Good for us UX designers. Exactly the skills that companies need:
go back to the user, go back to the customers, understand the needs, pains, and goals, and then think about whether AI really makes sense to integrate. You can still have the fancy features to attract investors here and there, but if it's not changing the whole experience for the better, then it's probably not a good solution, right?
Yeah. And I think there's going to be an interesting moment as well where we have to understand the right way to introduce this to the people who are latecomers to the technology, because there is a lot of hesitation to adopt AI; there's fear. Some of that fear is very well grounded, in what happens to your data and what information these models are trained on. Some of the fear is perhaps misplaced, or aligned to how it'll affect me and my job: if I use this, am I contributing to the system that removes my job? There's any number of consumer resistance patterns and reactions that are also going to influence how these products look.
What I imagine we'll see is for people who come to a product with no expectation of what this technology is and why it's here and how it can help my life, we'll continue to see those kind of surface level, what are you looking for today? Talk to me, tell me what you're thinking. But I think, and we're already seeing this with Microsoft Copilot, we're seeing this with the deep integration of AI into financial modeling tools and business tools. We're seeing it with other Copilot tools like GitHub, where
When AI can help you perform a task that you have to do already, and it goes faster, and it's accurate, and now you can move on to something else in your day, you're not just doing busy work, that is an aha moment. That's when people say, oh, I see, this isn't just here as a gimmick; this is here to actually improve my day-to-day life. And the more people get introduced to those aha moments, the more we're going to start to see those incentive patterns shift.
I think this is super fascinating, right? Because some of those things might happen in the background. You don't really notice that it's AI; you just recognize, oh, something is happening that is helping me. You feel enlightened, but you don't really know that this is AI. I think this will happen eventually. At the moment, you see the typical AI icon with the little sparkle everywhere, basically,
just to show that it's AI, but you wouldn't really need it, right? So in the end, it's all about the result that users get. Super interesting. But when it comes to patterns and designing AI products, I mean, you mentioned a few patterns already in the last minutes, but what are the common patterns to use as a designer?
How does it differ from the traditional design patterns or UX rules that you're familiar with as a designer? Yeah, absolutely. And as I talk about AI, I'm going to focus again on this sort of new layer of AI: generative AI, combined with traditional things like deep learning and machine learning. You know, AI encompasses all of these.
But now that you have the ability to communicate directly with the model through natural language processing, and again, these co-pilots and transformer models that are available to us, that's where we're seeing the change. And so let me first describe how user interactivity is changing with these products, and then that's how that's affecting the interface level. So up until recently, you know, any of us who have been doing this for the last 10 years
we're aware of the things we use to evaluate how our designs perform, things like how many clicks it takes for somebody to get to some result, right? And so much of our work in information architecture, hierarchy, and interaction design has resulted in this hub-and-spoke model: I, as a user, go to the navigation, I do this thing that takes me to this other thing, and it gets me to what I'm looking for.
But with this new layer of generative processing, that framework shifts. I no longer have to click multiple times to get what I'm looking for. I have to ask the machine what I'm looking for and then see how well I asked to get the response that I actually want. And sometimes that's very explicit, like in the chatbot scenario. Sometimes it's more implicit, where I might use suggestions from the model itself, make it shorter, change the tone.
It's this sense-and-respond model that we're moving to. And so as I start to think about how to interpret the patterns we're seeing, that's what's in my head: how many inputs do I have to use, as a user, to get a result that satisfies my need, that I trust, that's accurate, and that I can move forward with into whatever I'm trying to do next? So
As I've laid out my own little pattern language of what I'm observing, that's what I've followed. So the first thing is, I need to identify how to ping the model, how to introduce my prompt. Maybe it's, as you mentioned, the sparkly emoji: oh, there's AI here, I can do something different. Maybe it's the gigantic what-are-you-looking-for-today input bar. Or it might be as subtle as an icon or some trigger that lets me summarize some data table, for example. So I need to identify it. Then I need to be able to input my prompt, and I'm going to use 'prompt' to cover anything where the user is suggesting to the model, to the system, to the program, that they want something back.
And I need to be able to make sure it's likely to get me what I want. So we've got these patterns that guide people to good responses: suggested questions you might ask, the starts of sentences so you can finish them. You don't have to think; you don't have that open-canvas problem you face when you're just met with a blank page. Then I have to be able to get something back. What do those results look like? How do I interpret them? How do I know how likely they are to be accurate?
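As a rough illustration of those prompt-guidance patterns, a sentence-starter list is easy to sketch. Everything here, the starter texts and the function name, is hypothetical, not taken from any real product:

```python
# A small sketch of the "prompt guidance" patterns just described: suggested
# questions and sentence starters that save the user from a blank page.
# The starter texts and the function name are illustrative only.
STARTERS = [
    "Summarize this page in three bullet points about ",
    "Rewrite this paragraph in a friendlier tone, focusing on ",
    "What questions should I ask about ",
]

def suggest_starters(user_context: str, limit: int = 2) -> list[str]:
    """Offer a few sentence starters; a real product would rank these by context."""
    return STARTERS[:limit]

for starter in suggest_starters("quarterly report draft"):
    print(starter + "...")
```

A real implementation would choose starters based on what the user is looking at, but even a static list removes the blank-page problem.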
And then I need to be able to tune it. So I've got this pattern set that I call tuners. Okay, I got a response back, but how can I change it? Do I even know that I can change it? If I don't like the response, the only thing available to me shouldn't be a thumbs-down. I should be able to say: this isn't right, this isn't what I wanted, you're thinking about this wrong, you're not giving it back to me in the way I asked for. And then finally, I need to know if I can trust the results, not just in terms of accuracy and how I handle things like hallucinations, but: what are you trained on? Does this include proprietary information? Am I going to get in trouble if I use this in my work?
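The tuner pattern described here can be sketched in a few lines: rather than only a thumbs-up or thumbs-down, the user sends a corrective instruction that is appended to the conversation and re-submitted to the model. The class and method names below are illustrative, not from any real library:

```python
# Sketch of the "tuner" pattern: the user's correction becomes another turn
# in the conversation that travels back to the model with the full history.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    messages: list = field(default_factory=list)

    def ask(self, prompt: str) -> None:
        """The user's initial request."""
        self.messages.append({"role": "user", "content": prompt})

    def record_reply(self, reply: str) -> None:
        """Whatever the model sent back."""
        self.messages.append({"role": "assistant", "content": reply})

    def tune(self, adjustment: str) -> None:
        """A corrective follow-up: 'make it shorter', 'change the tone', etc."""
        self.messages.append({"role": "user", "content": adjustment})

convo = Conversation()
convo.ask("Summarize this all-hands recording.")
convo.record_reply("(model response would go here)")
convo.tune("This isn't what I wanted. Make it shorter and focus on action items.")
print(len(convo.messages))  # all three turns now travel back to the model together
```

The point of the sketch is only that a tuner is a richer signal than a rating: it carries intent the model can act on.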
And so, I mean, I'm laying it out linearly, but it's really like a system dynamic: I'm interacting with the system as a user. And of course, we have to recognize that most people are not technical; they're not systems thinkers at the same level. So finally, to your question of how this relates to existing patterns: this isn't an entire rethink of user experience. This is taking what we already know as best practices of user experience and applying a level of almost service design on top. What is someone trying to accomplish, and how can we give them better clues, more details, and more control, so that they feel like they're driving the system?
Awesome. So zooming out a little bit, looking at everything from the outside, this is super interesting, very inspiring. Does the research look different for these kinds of things? Do you do qualitative interviews? How does that look?
I think there are probably different ways of thinking about research depending on the opportunity you're identifying. And I've started to share a framework of different ways that AI is layered on top of our experience. So there are implementations of AI that are adaptive: they take an existing system and allow you to adjust it in ways that maybe you didn't have the control to do before. You know, I can take a data set, for example, or a Gong recording, and I can say, give this to me in a different mode, give this to me as a summary, or provide this to me in a different modality: I'm not in a position where I can read text, give this to me as a video.
And so they're understanding the ways that people might want to interact with the system, with the interface, and with the content, in ways that are separate from what they currently can do. That's one bucket, and I'll kind of touch on these and then go a little deeper. Then we have augmentation: not adaptive, but AI in a way that augments the experience. So it's a little bit different. It's not, let me see this in a different way; it's, let me do things I didn't even know I could do. So for example, maybe I'm working on a Notion doc and I get stuck,
and I'm trying to define design criteria for some problem statement, and I can say: hey, can you write out 10 different ways you might interpret design criteria, or ideate on this? What are 10 different ways I could approach this, just to unblock my brain? Let me work with you as a peer. I think copilots, particularly GitHub Copilot, are another great example, where I have the sort of adaptive nature, which is: summarize this code, tell me what it's doing, give it to me in a different mode. But I can also say: rewrite this for me. How can I make this better?
It gives me superpowers that I don't have. I'm not a programmer. I've used Copilot to write highly functional, I think, JavaScript and create things that were completely unavailable to me before. And then finally we have the integrated AI, which is like AI as the product. And that's what I think of as the chatbots themselves, the products that purely exist to let me interact with the model. So
There's probably more, but those are the three big buckets that I'm seeing. In terms of the research and how we think about approaching these questions, we have to come back to what is the person trying to do in this system? What is the most appropriate way to introduce AI in a way that generates value to them? Maybe it's a few of these.
And what does better look like? How do we define a good outcome? That helps us get away from the gimmicks. So if I'm working on some product that allows me to summarize text, it may be insufficient to simply say, give me a summary. That might be a fine way to create value for the user, but it's not going to be fundamentally differentiated; it's not going to do something I couldn't do with some other technique. But
what context from my usage of that product could help make that summary even more applicable to me? What resources might it suggest from my company's internal archive? So it's not just summarizing some team all-hands, but also giving me direct links to
everything in our Confluence that might relate to it. Maybe it's providing next steps that I, as a manager, can use for my team, based on my role, who I manage, and how this presentation relates to me. So thinking about it, talking with users, understanding what people are trying to do that they can't easily do today, and then ideating on how AI could bridge that gap.
That's how you start to use it to generate value, and not just offer gimmicky services that are useful in the moment but don't fundamentally change my understanding of what this product does and why it's useful to me. Yeah, I think that makes total sense, right? And I think it's so important that you're mentioning and emphasizing that, because I feel a lot of people haven't really understood
how AI is really used for products, right? Because when you talk to people or also when I'm talking to clients or when I'm seeing posts on LinkedIn, it's all about like, how can we integrate tiny AI solutions in our products? Let's find something. Let's find something. Not so much
about, okay, let's go back to the user problems, see what their goal is, and how we can strategically integrate it. Maybe it's not something we integrate today, but something we have on our roadmap, a long-term future vision for AI. Super fascinating. I mean, you're also sharing a lot of articles and resources around AI in general, right? So I can highly recommend that everyone who's listening check those out.
And tell us a little bit about Shape of AI. What can people expect from it? It's a whole website where you gather all this information. So what can people expect from the website?
Well, as you might imagine, I have big ambitions and now I have to find the focus to actually see them through in between all of the things that keep catching my curiosity about this space. But a couple of things that are currently in process, if people are interested in learning more. So as you mentioned, shapeof.ai is the website. I put that out about two months ago.
And I'm already starting to work on some additions to that site, some of the things I've heard people express an interest in. The first is looking at existing AI products and mapping the patterns I'm seeing to those products. How can we start to see the connection between this abstraction of the user experience of AI and the actual, let's presume value-driven, implementations that we're seeing in the wild? So that's something I'm currently working on, and I'm hoping to have a public collection of screenshots and flows. I already include those with every single pattern, but people have been sharing a ton of suggestions with me, and so my intent is to provide that in catalog form. So thank goodness for Webflow, they make that so easy for me, and now it's just a matter of sort of hitting go on that. I'm also working on some courses and some ways that people can take this into their own day-to-day work.
And again, trying to approach this from the perspective of a designer: what is the thing that's actually standing in the way of people adopting this thinking and working with these tools? And what keeps standing out to me is that it's not a deep dive into the technology, LLMs and GPT, and I know that you're working on some courses as well, but: how does this actually integrate into my day-to-day life? How can I start using these tools in a way that lets me see the value, and then start to think about what this actually means for how I approach solving problems for other people?
And I just think there's a lot of conversation that needs to happen. So maybe if it's not in the form of a course, certainly conversations; I've had the opportunity to speak with many people. And then beginning next week, and I'm hoping to actually make this public today, today being Earth Day, April 22nd, so timestamp that one for accountability, we're going
to start our open office hours. So hopefully every couple of weeks we'll meet and just talk about what we're seeing. And I also have some incredible experts from the field: recruiters in AI, people working on AI products with children, people thinking about how this aligns with accessibility, and people thinking about what it looks like to design agents and do character design. There are a number of folks who have been open to meeting with the community and sharing their own experience, and we'll be rolling those out soon. So
those are some of the big ones. And then finally, I have a newsletter on LinkedIn, and also on Substack, Shape of AI, where I'm sharing regular thoughts, what I'm seeing in terms of those resources, changes in the market, and just kind of
learning out loud, because I certainly don't claim to have all the answers or even to be the top expert on this. But as I'm learning, I'm realizing many people have the same questions that I do. And so I'm trying to share as many of those thoughts out in the open as I can, to spur other people to share what they're seeing as well.
Awesome, I love that. And I think it's a great approach: learning together and not pretending you're the 100% expert. No one is the expert at the moment, because so many things are changing. So learning together, I feel, is the best approach: supporting each other and sharing the learnings, the fails, the successes. This is what we need currently in the design community to really grow and take that next step with AI. Yeah.
Yeah. So from all the discussions that you've had, also with other people, maybe with your clients: what do you think are the important skills and methods that designers need at the moment to really have a successful future? What would you recommend? That question could take us into another hour.
Yeah, let me, I guess, break that into its two parts because there are things that need to change in how we practice design, both in terms of our processes, but also how we think about what we're producing, evaluate what we're producing, and ultimately produce great things.
So there's this question of how does our role within our teams and our organization and our process change? And then what can we be doing as individuals to make sure that we are resilient and positioning ourselves for success within those changes? So on that first question of what's going to change.
First of all, anything that is mundane, anything that is a task that can easily be automated, those are going to be eaten by AI, and they're going to be eaten faster than you realize. And it's probably a wider collection of stuff than you realize. So I've actually started a list in my notes app on my phone. Every single time I find myself going through some task and thinking, AI could do this for me so much easier, even if I don't know what tool would do it or even if it doesn't exist, or anytime I've used AI to complete some task, I've been taking note of it. And that list is getting really, really long.
The good news is, there's nothing in there that's existentially threatening. In fact, it's been very helpful in governing some of my emotional reactions to the pace of change, as I see the types of things being captured in that list and how they make my life easier. So that's the first thing I'd say:
Start making your own list. Make it for yourself, make it for your team, make it for your users. What are the things happening in somebody's experience today where you're seeing a way that this generative technology, in some form or another, could make that experience more meaningful, more useful, more usable, more accessible, and so on? So that's one. And in terms of the process change, as we were talking about earlier, a lot of our research and design evaluation is going to come down to taking the fundamentals of human-centered design, understanding people, understanding their journeys, and then mapping that to these velocity gains, or however you want to position them. Another thing is
recognizing that AI is not going to come be our art director. Maybe, I don't know, at some point in the future we'll all be sipping Mai Tais on the beach, but in the short term, AI is much more likely to be positioned as an assistant, and the data is backing this up. We're not seeing people's lives completely upended; we're seeing AI come in in sort of predictable ways. It's like a very smart, personalized assistant, or the best intern you've ever worked with. That's the type of role that AI is playing in our lives. And so allowing it to play that role, using these tools, getting familiar with them, is one of the ways we can adapt our process.
And then that helps us better understand how the technology works. And then we can bring that skill with us into our work as designers. So the next time you're doing an ideation exercise, just force yourself to use AI to help you come up with ideas. They might be terrible.
The goal is not to have AI produce the right idea. The goal is to understand how AI produces ideas, how those differ from human produced ideas, and where those can be useful in the system. So the more you can use the tools day to day and understand where they're helpful and where they're not, the better positioned you're going to be to think critically about how that can be adopted in a human-centered way in your products.
So we have identifying the use cases, and then putting those into the context of usage and thinking about where they fit in. And then the final thing is recognizing that our job as designers, or anybody working with technology, has changed a lot over even the last decade. Our devices change rapidly. The contexts those devices are used in change. The age groups using them change. All of this has happened in the last decade, at a very, very fast clip.
So we're already used to adjusting the way we think about the material that we're working with. Now we need to apply that to a new medium. And thinking of AI as a component of the system you're working with, not just a solution and not a threat, is a way that we can be resilient for what's coming. And so understanding the technology, how quickly it's changing, interacting with the LLMs themselves, looking at the ways it changes across different modalities, image generation, text generation, voice generation, and just
exploring it from a place of curiosity, similar to how we would explore responsive design eight or nine years ago. And how does this really affect what we're designing and our information structure on these different devices? You had to use it to get close to the constraints to really understand it as a medium. And so that's maybe the last piece. So identify use cases,
put them to work, and then explore the underlying technology to understand the constraints against those use cases and how we can be thinking critically about them in our work as designers. I think that makes so much sense, right? And everyone who's listening now should...
I think really focus on these topics at the moment. Really dive into AI, learn about the tools and try them out. And if you're not allowed to use them in your daily work because of company policies, use them in your private life, but really dive into it. And I think it's super important to do less talking about it. And I don't know, I see so many people talk about it, but I feel like they have never really used these tools and they've never really tried them out. And there's a huge difference between
I don't know, trying them once or twice versus using them on a daily basis, experimenting, trying to break them, and really seeing what the limits are. Super important. And also learning how to judge the results, right? You also mentioned that this is a skill we need to learn: basically giving feedback all the time. And this is something we need to learn and
practice, right? It doesn't come very easily to us, especially if we are junior or even mid-level designers; we're not used to giving feedback all the time. The more mature we are, the easier it gets, but in the beginning we need to practice it, I feel, at least from my experience.
What do you think, what would be some tools that you would recommend to get started on, especially for beginners? So first and foremost, try out the different base models. So go to ChatGPT, go to Claude.ai, try out Llama, the open source model from Meta.
You can go to the sites themselves. There are also ways to use the developer tools, which actually makes them slightly cheaper. And there's a lot of great guides online to help you do that.
The second thing is: use available prompt libraries, but then create your own. So if you're a little intimidated, go in and look at how the people who are selling or giving away prompts are shaping them, but then try to generate your own and don't rely on those libraries. There are two reasons for that. The first is that there are pretty common strategies behind any one of those gigantic lists. They generally boil down to 50 to 500 permutations that are all kind of the same version of about 10 different strategies. Things like providing suggested formats to the AI, giving very clear directions, giving it context. If I wanted you to write the summary of a book, I might say, you are a book editor for the New York Times and you are about to write the summary that's going to change your career, dot, dot, dot, and then give it the instruction.
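The role-plus-stakes framing described here can be sketched as a small template. This is a minimal illustration of the common prompt strategies mentioned (assign a role, raise the stakes, state the task, pin the output format); the specific strings are made up for the example, not drawn from any particular prompt library:

```python
def build_prompt(role: str, stakes: str, task: str, output_format: str) -> str:
    """Assemble a prompt from common strategies: assign the model a role,
    add stakes/context, state the task clearly, and fix the output format."""
    return (
        f"You are {role}. {stakes}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

# Example matching the book-summary scenario from the conversation:
prompt = build_prompt(
    role="a book editor for the New York Times",
    stakes="You are about to write the summary that will change your career.",
    task="Summarize the book in three sentences.",
    output_format="plain prose, no bullet points",
)
```

Most entries in the big prompt lists are variations on exactly this kind of template, which is why writing a few of your own teaches you more than collecting hundreds of theirs.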
The other thing, though, and this is why I'm saying use them, is that you're going to find really interesting things. There's actually increasing awareness, with studies coming out, that the machines themselves might be better at writing prompts than we are. I'll share some of these resources so that you can include them in the episode notes.
But the role of prompt engineer is going to go away as these machines get smarter at anticipating how to write prompts for themselves based on the outcome that we're describing. So.
Use the prompt libraries, but then go and create your own. Then go and use the different tools. So pick a use case area. For example, if you are a copywriter or you're on the marketing team, you're going to want to use tools like Writer, tools like Copy.ai, tools like Jasper, and explore the different ways that they help you shape prompts.
the different ways that they give you those tuners and those clues that help the user get to the better type of input faster, and the different ways that they return the results, which of them use different modalities like text, audio, video, and so on. So go and take an area and use as many of the products in those areas as you can and document what you're seeing because they do a lot of things the same, but they also do things differently. And wrapping your head around why they do those things differently is going to be critical.
So those are the big ones: use the models, put pressure on them, and then go and see them in actual real use. And then the final thing is, if you're really feeling ambitious, you can build your own GPT, which is a way that OpenAI lets you actually define your own bot in a really clear way.
Think about how you would strategically train a bot to give certain types of outcomes. Think about what guidance you would give it, how you would essentially program its output using plain text to get repeatable results. Because that is the art of working with AI, is providing just enough information that you get the same good results, generally speaking, over and over and over again, and then making it easy for somebody else to put that input in who doesn't have your context.
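The idea of programming a bot's output using plain text can be sketched as a reusable system prompt that pins down role, rules, and format, so that anyone can supply the user input without having your context. This is a hedged sketch: the message structure below is the standard chat-message shape used by OpenAI-style chat APIs, and the editor persona and rules are invented for illustration:

```python
def build_messages(user_input: str) -> list[dict]:
    """Pair a fixed system prompt (the 'program') with variable user input,
    so the same instructions produce repeatable results for any user."""
    system_prompt = (
        "You are a release-notes editor.\n"        # role
        "Rules: under 100 words, bullet points, "  # constraints for consistency
        "neutral tone.\n"
        "Format: one title line, then 3-5 bullets."  # repeatable output shape
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Summarize: fixed login bug, added dark mode.")
# These messages would then be passed to a chat-completion call,
# e.g. client.chat.completions.create(model=..., messages=messages)
```

The system prompt carries all the context; the only thing the next person has to bring is their input, which is exactly the repeatability Emily describes.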
So that would be my big set of things to do. Just do it every day. Give yourself an hour, give yourself 20 minutes and say, I'm going to go interact with these tools and get as comfortable with them as I can be. I think amazing tip. And it's also a lot of fun, right? This is not a chore. This can be a lot of fun, right? Like helping you write...
I don't know, maybe letters or things that you don't enjoy that much, where you feel like you need a little help, and then you enjoy it. For me, writing emails, writing letters, these kinds of things, I really love to use any kind of large language model and see what it comes up with. And I really love the process of seeing the results, then doing some tweaks, then putting it back in and going back and forth.
So finding those areas where you feel like you need a little bit of support and then try things out. I think that's an amazing tip. I agree. It's fascinating. If you are curious, which I'm sure most of your listeners are, it's very fun. And you're going to see things that you can only see if you spend time with it. Some of the hallucinations are really interesting. So yes, it's fun. It just takes a little bit of time to get over that initial hump.
Yeah, good point. Yeah, be aware of hallucinations. You mentioned that earlier already, but it's also something to be aware of. So be very critical of the outcome; sometimes it's just not true. Never copy and paste it and use it as-is. There are so many funny examples on the internet where people did that and sent it in an email. Yeah, critical thinking is definitely key right now. Yeah.
So Emily, thank you so much for sharing all of these things. If people want to reach out to you or connect with you, where can they find you best?
Absolutely. So my contact information is on shapeof.ai. There you can get access to my newsletter. We have a Slack channel that's growing and some very engaged members there. And I'm hoping to see that become even more vibrant as a community as we open up things like our office hours. Those will all be posted on my LinkedIn page.
I'm Emily Campbell on LinkedIn and you can find me there. You can also find me at emilycampbell.co, which is my website, and all the other lovely social links, since I've lost track of all these platforms we're supposed to be on nowadays, you can find on my website too. And I'm always eager to chat with people, hear how you're using these technologies, hear what questions you have or what questions you've been answering for yourself.
You know, as we both have said, this is such a team sport. It's such a huge space, it's changing so fast, and if you dip out for just a week, you already feel like things are passing you by. And so, yeah, stay in touch. And I'm excited to see what other people are sharing as well.
Awesome. Thank you so much for everything that you are doing for the community. I think it's a huge value for all of us. And thank you for your time, for sharing all these insights in the podcast episode. I really appreciate that. And yeah, hope to talk to you soon. Everyone, let's check out Shapes of AI. Connect with Emily if you have questions and learn about AI together, I would say. Thank you so much for having me. Thank you. Bye.