
The future of AI coaching

2024/11/22

The Future of Everything

People
James Landay
Russ Altman
Topics
James Landay: My research focuses on long-term projects, such as improving the education and health systems. In health, we developed the GPT Coach application, which uses a large language model and motivational interviewing techniques to give users personalized fitness plans. The app is not off-the-shelf ChatGPT; we refined and optimized it, and we incorporated users' historical data. We also worked with experts from the School of Medicine to bring their professional knowledge into the system. Future AI coaches will combine conversational and graphical interfaces and adjust their tone and style to user preferences. In education, we developed the Smart Primer and Acorn projects, which use narrative stories, mobile augmented reality, and large language models so that children get personalized feedback as they learn and are encouraged to write more. In Acorn, the large language model automatically generates personalized stories, which produced notable learning gains and has the potential to extend to broader educational applications. We also emphasize human-centered AI design, attending not only to the user experience but also to the impact of AI systems on communities and society.

Russ Altman: AI can act as a coach or tutor to help people reach their goals. Large language models can encourage students to write more. Existing fitness apps fall short in handling users' individualized needs; large language models can better handle qualitative issues. Adding social features to an AI coach requires caution, because they can backfire. Ambient awareness means using devices such as phones to deliver information without the user actively looking, improving engagement. Moving learning activities outdoors can increase students' interest and engagement in learning.

Deep Dive

Key Insights

What is the primary focus of James Landay's work in AI?

James Landay focuses on human-centered AI, particularly in education and health, where AI serves as a coaching and tutoring tool to optimize technology for societal good.

How does James Landay's GPT Coach application work?

GPT Coach uses a large language model to simulate a personal fitness coach, processing qualitative data from users to create personalized fitness plans, incorporating techniques like motivational interviewing.

What role do health professionals play in James Landay's AI coaching systems?

Health professionals provide insights and methods for coaching, which are integrated into the AI system to ensure the coaching style aligns with effective health practices.

How does James Landay's Smart Primer project use AI in education?

Smart Primer uses narrative-driven tutoring systems with augmented reality and AI to engage children in hands-on learning, encouraging them to write more through personalized feedback from large language models.

What is the concept of ambient awareness in James Landay's work?

Ambient awareness involves using subtle displays on devices like phones to provide users with information about their progress towards goals without requiring active attention, enhancing engagement and goal adherence.

Why is human-centered AI important according to James Landay?

Human-centered AI considers not just the user but also the broader community and societal impacts, ensuring AI systems have positive effects beyond the immediate user interaction.

How does the Acorn project differ from previous Smart Primer projects?

Acorn allows for dynamic story generation based on user interactions, enabling personalized learning paths and larger learning gains compared to pre-written narratives.

What are the potential benefits of integrating social features into AI coaching systems?

Social features can provide group support and motivation, though they must be carefully designed to avoid negative interactions that could discourage users from achieving their goals.

How does James Landay envision the future interface of AI coaching systems?

Future interfaces will combine graphical elements, ambient displays, and conversational interactions, allowing users to adjust plans and receive motivation through a mix of visual and verbal cues.

What is the significance of the learning gains observed in the Acorn project?

The Acorn project achieved significant learning gains with large effect sizes, indicating the effectiveness of personalized, narrative-driven learning experiences powered by AI.

Chapters
This chapter explores the potential of AI in personalized fitness coaching. It discusses the use of large language models to address qualitative aspects of fitness, create tailored plans, and adapt to user preferences. The role of health experts and the integration of various data sources are also highlighted.
  • AI can address qualitative fitness issues that traditional apps miss.
  • LLMs can process text to understand user needs and create personalized plans.
  • Motivational interviewing techniques are used to guide the AI coaching process.
  • Future plans include incorporating visual elements and different coach personalities.

Transcript


Hi, everyone. It's Russ Altman here from the Future of Everything. We're starting our new Q&A segment on the podcast. At the end of an episode, I'll be answering a few questions that come in from viewers and listeners like you.

If you have a question, send it our way either in writing or as a voice memo, and it may be featured in an upcoming episode. Please introduce yourself, tell us where you're from, and give us your question. You can send the questions to thefutureofeverything@stanford.edu. The future of everything, all one word, no spaces, no caps, no nothing, at stanford.edu.

S-T-A-N-F-O-R-D dot E-D-U. Thanks very much.

The LLM allowed us to tell what they had written. And sometimes you get a kid who just writes almost nothing. And we were able to use the LLM to encourage those kids who didn't write much to actually write more. And that was one of the big outcomes is you were able to actually have kids write more. And the kids thought the writing was one of their favorite parts of this experience.

This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you're enjoying the show or if it's helped you in any way, please consider rating and reviewing it. We like to get fives, but give us what we deserve. Your input is extremely important for helping us grow the show and spreading the word.

Today, James Landay from Stanford University will tell us that AI is not just good at creating text and answering our questions, it can motivate us as a coach, and it can teach us as a tutor. It's the future of AI coaching. Before we get started, another reminder to rate and review the show. Give us a five. It'll help spread the news.

Large language models like ChatGPT and many others have started to sprout all over, and people are using them for productivity. We're writing letters, we're editing our text, we're answering questions. There's a lot of stuff people are doing. But what we don't always think about is can we use this AI as part of a coach or a tutor to help us get from where we are to where we want to be?

Well, James Landay is a professor of computer science at Stanford University, and he's a co-director at the Stanford Institute for Human-Centered AI. He will tell us that he's built a prototype health fitness coach that uses AI. He's also created a tutoring system for elementary school students where they learn about the planets, about global warming, about fires, all through a mixture of large language models, getting outside and interacting.

James, you were on the show in 2019, May of 2019. It was great. But now it seems like you're working a lot on using AI for teaching, for coaching. How has your work evolved in the last few years and what are you really excited about? Well, a lot of stuff has happened in the world since 2019, Russ, as you may recall.

But most of my projects I see as more like 10- or 20-year efforts. They're big problems that, if you solve them, can have a big impact on the world. So problems like how do we improve the education system, or how do we improve people's health, to me are not things we solve in three years. They're things we work on for a long time. And then, along the way, we come up with new ideas that might improve it. So, for example, in health, when I spoke to you before, we talked about work on how you can give people better

awareness of what they're doing towards their fitness goals, by having what we call an ambient display. So, for example, a story or images on the lock screen of their phone. What we've been working on since then is how we can give people more of the sense of having a personal coach,

even if they can't afford to have a personal coach. So, you know, just like personal tutoring, personal coaching is highly effective for individuals who are trying to change their health or fitness. But many of us can't afford to have a personal coach, because it's quite expensive and there aren't enough coaches out there to handle all of us. So,

combined with what's going on in AI, LLMs, for example, we're able to use that. Large language models? Yeah, large language models, or as we like to say here at Stanford, foundation models, because we're going to build other applications on top of those foundations.

We can use that type of model to get at people's real needs for coaching. So you can think of all these fitness apps out there, whether it's Apple's fitness app or the fitness app from Google, Google Fit. They tend to be very quantitative in form. So maybe you have an app where you can put some goals in quantitatively.

But it's the qualitative issues that often trip people up. You know, the app's telling me, hey, you know, go running every morning. But, hey, I need to drop my kid off at preschool and I can't fit that into my schedule. You know, their user interfaces. I'm sorry to interrupt. The user interfaces are terrible. I have a Garmin watch and sometimes I do a five-mile run and at the end it says unproductive workout. That's all it says.

And I just want to Frisbee it into a brick wall. So you've already gotten me, but please continue. So what's nice about these large language models is they're really good at processing text, right? And a lot of the ways we might express some of these qualitative issues of what's worked for us in the last years when we've done our fitness activities or where we might have barriers, these models are actually good at

being able to process that kind of information if we have a conversation with it, like you might have with a coach, and then help you develop a plan for your fitness that accounts for those issues. So we've built an application called GPT Coach, where we used

a large language model. And, you know, it's much easier said than done to actually get these things to do what you want. But essentially we can create a bunch of what we call agents that use different pieces of the model, and this way we can check what kinds of things we're telling you and keep the model on track, because we use a technique called motivational interviewing, which was

popularized by researchers here in the School of Medicine at Stanford for coaching. And we can have this coach pretty much use that style of interviewing to

find out what's worked for you in the past, what your barriers are, and then together come up with a fitness plan. And we tested this in the lab with people, and they were blown away by how good it was. In fact, when I first tested it, I said, this could be a product right now. That's how well I thought it worked. Okay, so many exciting things there. So first of all, I take it this is not out-of-the-box ChatGPT.

Not out of the box. We have to essentially break it up into a bunch of different agents that have different tasks they're trying to do in terms of this interviewing, and also keep it on track and check where we are in the process, what kind of information we've gotten. And we also get information out of, you know, you have a Garmin watch, a lot of people have an Apple Watch or just an iPhone. We're able to actually take three months of their prior data and use that to also drive the conversation, understanding what they've been doing.
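Here's a minimal sketch of the kind of staged, multi-agent loop James describes: one agent holds the motivational-interviewing conversation while a second "checker" agent audits whether each stage has been covered, with a wearable-data summary fed into the prompt. This is a hypothetical illustration, not the actual GPT Coach code; the stage names, prompts, and `ask_llm` stub are all assumptions.

```python
# Sketch of a staged LLM coaching loop: a conversation agent talks to the
# user; a separate checker agent decides when each motivational-interviewing
# stage is done. Hypothetical illustration only, not the GPT Coach code.

STAGES = [
    ("history", "Ask what physical activity has worked for them in the past."),
    ("barriers", "Ask what gets in the way of being active day to day."),
    ("goals", "Help them articulate a realistic fitness goal."),
    ("plan", "Co-create a concrete weekly plan that respects their barriers."),
]

def ask_llm(system_prompt: str, transcript: list[str]) -> str:
    """Stub for a chat-model call; wire this to your LLM provider."""
    raise NotImplementedError("connect an LLM API here")

def stage_complete(stage: str, transcript: list[str]) -> bool:
    """Checker agent: a second LLM call that audits the conversation so far."""
    verdict = ask_llm(
        f"Answer YES or NO: has the coach fully covered '{stage}'?", transcript
    )
    return verdict.strip().upper().startswith("YES")

def coach_session(wearable_summary: str, get_user_input) -> list[str]:
    """Run the intake conversation stage by stage."""
    transcript: list[str] = []
    for stage, instruction in STAGES:
        system = (
            "You are a supportive fitness coach using motivational "
            f"interviewing. Current objective: {instruction} "
            f"User's last three months of activity: {wearable_summary}"
        )
        # Keep conversing until the checker agent says this stage is covered.
        while not stage_complete(stage, transcript):
            transcript.append("COACH: " + ask_llm(system, transcript))
            transcript.append("USER: " + get_user_input())
    return transcript
```

The design point mirrors what James notes above: a single free-running model drifts, so a separate checking agent is what keeps the conversation on track through the interview stages.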

So another really intriguing thing you made a quick reference to is that you've been working, it sounds like, with the School of Medicine experts on fitness and health. Tell me about that, because now you're trying to get this large language model to do things at a very kind of virtuoso level. So what is the role of the health professionals, and how do you get their knowledge into this model?

Yeah, I mean, most of my projects tend to be interdisciplinary, working with folks outside of computer science, or sometimes even in computer science in a different subfield. So, you know, as you mentioned, the last time I was on the show, we talked about buildings and smart buildings, and I work with people in civil engineering and education and sustainability on that, as well as health.

One of the other projects I've had going for a while, that I talked about last time, was these ambient displays for health and fitness. Now, as part of that, we collaborate quite closely with people in public health in the School of Medicine. And, you know, that team has a lot of experience training coaches and knowing what the best methods for coaches are. So in this case, they actually created a manual.

And so we were able to take that manual, understand how it's structured and the information in it, and actually feed that into the large language model as well, to help our system do it in that style. And we also consult with them on this. They haven't been as close a collaborator on this project, but we've consulted back and forth with them, and they may become closer as we get to the next stage, because

all we've done so far is replicate that first 30- or 60-minute meeting you might have with a coach when meeting them for the first time, where they're trying to understand what your goals are, what your barriers are, what's worked, and come up with a plan. But the next step of the project is, okay, now what does it mean for me to have a coach on my phone working

over the next six months as I'm, you know, partaking in my exercise program. And so that interface is a harder design because this first one really is a conversational interface. It's like we're having a meeting. But now I'm going to have this application running on my phone and we don't think

language is the interface that you always want to use. You want to have graphical elements that you can see. And there we learned from some of that prior research I've done with an ambient display that's showing some kind of visual story as you go. But now you might want to use language. For example,

it might remind you, hey, Russ, you committed that you were going to go running this morning. And you might want to say, oh, but I have a big meeting with James Landay and he's really important, I can't miss it. Really important. Right. Or maybe you're sick. And so you might use that, and then the coach might be able to kind of adjust your plan. So changing your plan, or having exceptions, or being able to, you know, flag when you're not feeling up to it. And maybe it's going to, you know,

work on some other way to motivate you, or give you an easier goal that might help you get there. So we'll have a kind of combination interface now, with a traditional graphical interface and ambient display, but also with the part where you might still converse when you want to.

That's what we're designing right now. And the plan then is to do a short-term study of that, like three weeks, just to get the bugs out. But then the true goal in all this kind of work is, can we run a study that might run over three, four, six months and show that people are doing better at hitting their goals

or changing their behavior than they would otherwise, compared to a control. Yeah. So this sounds very exciting. And by the way, sign me up for that. But here's a question. When you have a coach, I know some people who've had coaches, and I know there's a process of interviewing them, because you're always trying to find the coach

whose, like, perspective on life and whose perspective on health and wellness kind of matches yours. You know, some people like the drill sergeant. You know, we've all seen these ridiculous videos on YouTube where they're like, go, you can do it, work harder, work harder, work until you drop. And then there's other people who are much more like, let's do whatever you want to do. Do you imagine that you're going to have that,

or do you think the LLMs can already modulate their tone based on the preferences of your users? Yeah, that's a really great question. So in fact, as we're designing the visual version of this,

we see a kind of avatar-like character that represents the coach. And so we are doing an online study to test these different designs for those coaches, and also the personalities. And that's what we want to discover, whether different people have preferences for different types. And we're trying to design it in a way that we could probably swap in different personalities and such.

Now, my caution to my grad students is we probably don't want to do that in the first version of this, because then that just makes the study results harder to interpret. Right. Good point. Was it because you had a different coach? But I think for a product, you're probably going to want

both different visual themes, like, you know, we have a space theme and a beach theme, you know, different people, maybe you want to make a whole theme. Dude, we have to exercise, dude. Right. So there's different themes in the visuals, but there might also be different personalities, where some people want a coach that's really tough and pushes them. Like, I had a coach once who could make me cry, literally. Yeah.

Like, because he could push me beyond where I could push myself. And some people are okay with that, but other people would quit if they had that. So understanding that is something that we're looking at, but we probably won't push it into the first version, because, again, it just adds too many confounds for the study. But I think for a further long-term thing, yes, you'll probably have different personalities, and yes,

in terms of the LLM, you can do that, but we also see the need for it in the visuals as well as in the personality of the avatar. So we're looking at that in all aspects. We probably won't see that in the first version of this.

And I'm very aware that I'm giving you all of these features and that you need to walk before you can run, but that won't stop me. My next question is, what about group activity? A lot of people get a lot of their sustenance and kind of support there, especially in health, like they're part of a running group or they're part of a rowing or a workout group. Do you have, maybe not today, but is there in your vision some sort of social support from other humans?

I mean, it's not in the initial version. I've worked in this area for a long time, and I have to tell you, people think, oh, yeah, social, you've got to add social. But we did some of the early studies on this, and we found that social can actually backfire and make it worse. I remember this from when I was in Seattle at the University of Washington, running a research lab for Intel. We'd done one of our first studies like this, and the people were in a group,

And literally one woman drove by one of the other women, who was walking up a hill in Seattle. And the lady said, why are you walking up the hill? You can get credit just for walking on the flat. So, you know, even the group was discouraging her from doing the exercise. So it is something you have to be very careful about how you design, because it can also cause the opposite result. So we're not looking at a group right now, but it is something, obviously, again, if it was a real product, you might think about where you might integrate that. Yeah.

And my next question is a little bit about a definition. Even in this conversation, and definitely in your writings, you've talked about this idea of ambient awareness. Could you define that for me and let me know why it's an important thing? Because that's not the kind of thing I usually attribute to a computer, or even to an LLM. Like, usually my LLM is in a little white box on my computer. It has no idea if I'm in a rainstorm or at a spa. So what is ambient awareness to you?

So when I think about what ambient awareness or an ambient display is, it really comes out of this idea that a lot of times we're attending to something else in the real world. We're not, like, staring at our phone doing something on it. Right. And how do we take advantage of those glances at our phone to actually communicate

information to us in the background, even if it's not the primary task? With a lot of these fitness apps and things like that, for you to know what's going on there, you need to turn on the app and look at it and go, oh, yeah, I've walked this much today, I've run this much. But only if you go check it are you going to be aware of what's going on. And so our research is based on this idea that people who are really good at sticking to their goals

are people who are just much more aware of what they've done. They're tracking it more. Either they're actively checking, or they're just aware: oh, I know I parked my car over there, I've walked this far. But those of us who are less aware have a harder time,

you know, understanding, are we doing well today? Do we need to do more? And so the idea of ambient awareness is, can we have a display that you might just glance at and see, that gives you a sense of how you're doing? So we take advantage of the lock screen of the phone,

or the wallpaper when you unlock, as a way of just seeing some kind of display that gives you a sense. The one example I think you see out there today is on an Apple Watch, where you might see those rings if you use that display. I think they probably got that from our research, because we've been working on this for years. But it's a little too subtle. Most people are not even aware of, well, what do those rings mean? And is that good or bad? It's kind of small on most people's watches.

And so for us, it would have to be something you really see. And by just a glance, you have a sense of, hey, I'm doing pretty well today or I'm doing well this week or no, I should bring my gym bag because I really need to do more. And so we want to take advantage of those glances, even if you're not running the app explicitly to get a sense of how well you're doing.
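To make the idea concrete, an ambient display can be as simple as mapping progress toward a daily goal onto one of a few lock-screen images, so a glance carries the information without opening an app. This is a toy sketch, not code from the project; the image names and thresholds are invented.

```python
# Toy ambient display: map progress toward a step goal to a wallpaper
# state, so a glance at the lock screen answers "how am I doing today?"
# Image names and thresholds are invented for illustration.

WALLPAPERS = [
    (0.00, "seedling.png"),    # barely started
    (0.34, "sapling.png"),     # making progress
    (0.67, "young_tree.png"),  # nearly there
    (1.00, "full_bloom.png"),  # goal met
]

def ambient_image(steps_today: int, step_goal: int) -> str:
    """Pick the wallpaper for the highest threshold progress has reached."""
    progress = min(steps_today / step_goal, 1.0)
    chosen = WALLPAPERS[0][1]
    for threshold, image in WALLPAPERS:
        if progress >= threshold:
            chosen = image
    return chosen

print(ambient_image(6200, 10000))  # -> sapling.png
```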

No, I love that idea because in addition to my phone and my watch, I have like literally right now I have two monitors in front of me. There's a lot of real estate not being used. A little corner telling me, you know, you're not acting healthy or whatever. That could be a really important thing. This is the Future of Everything with Russ Altman. More with James Landay next.

Welcome back to the Future of Everything. I'm Russ Altman, and I'm speaking with James Landay from Stanford University. In the last segment, we talked about James's work building health fitness coaches, and they seem to work and look very promising. But he's taken some of the same ideas and is now using them with elementary school children to help them have a richer learning experience. It involves getting them outside, interacting with the world, and

using large language models, which, he will tell us, get them to write more. Maybe large language models are not the end of writing as we know it.

But I know, James, that you're also looking at education, and specifically elementary school education, which is arguably the most critical, in the same way that pediatric medicine is. That's the future. So tell me what's happening in AI for elementary students. Yeah. So, again, one of my long-term projects is this project called the Smart Primer. And the whole idea there is, can we use narrative stories,

with activities embedded in the narrative as a way to get kids engaged in their education. And the high level motivation of this is that, you know, many of us do well in the school system. It's kind of a factory school system. And anyone who's here at Stanford probably did well in that system. But there's probably a lot of talent out there.

that just never fit in that system and didn't get really motivated and excited by school. And then, you know, in some ways they don't meet their potential in society, and probably end up in careers that are less satisfying and less economically productive. And so one of my goals was, is there a way to motivate kids,

outside of traditional school, to learn, and maybe that will carry over into their other educational outcomes over their life. So the Smart Primer is a series of projects trying to explore the use of narrative in a personalized tutor.

So we've built a variety of these over time. But now that the AI part of this is starting to work better, with these foundation models and LLMs, we can do more. So a couple of summers ago, we built this application we call Moon Story. It runs on a smartphone. And as part of this, kids learn about the environment. They learn about

the planets and the sun, and the scale differences between the planets and the sun, and they learn about the scale of the distances between the planets and the sun. So, in fact,

to use this, they do mobile AR on the phone. And we have these kids doing it. You said something, mobile AR? Yeah, define what that is. Mobile augmented reality. So, not having to wear goggles; instead, on your phone, we can see through the camera and see objects in the physical world, but we can overlay data on top of them.

So, for example, we had kids come over here to Stanford. We have on our science and engineering quad these huge, I call them big marbles. It's a big art installation.

There's something like 10 or 12 of these huge marbles. I love those things. They look like planets. They really do look like planets. So I had this idea: oh, could we map those to the planets, and the distance between a subset of them to a kind of scaled-down distance between the planets? And what do you know?

We were able to put the sun at one end of that quad and go through the inner planets pretty accurately, in terms of the right distances between them. And so the kids see the sun. And then as they go to these planets, they can actually see the scale of the real planets relative to the sun. And the distances they walk, they learn, are kind of the relative distances.

And then they get through all the inner planets, and then we have them go from Mars to Jupiter. So Jupiter is the first outer planet. They have to walk all the way from the science and engineering quad to Stanford's Memorial Church, which is maybe a third or a half mile away. And that shows how far it is from those inner planets: you were just going, you know, 20 meters, and now you're going something like 400 or 500 meters to get to that one. And if you want to go to Saturn, by the way, it's off

past that other end of the campus.
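As a rough sanity check on that campus-scale mapping, you can work out the scaled walking distances yourself. The orbital radii below are approximate mean values, and the 150-meter Sun-to-Mars span is an assumed stand-in for the length of the quad:

```python
# Back-of-the-envelope scaling of the solar system walk.
# Approximate mean orbital radii, in millions of km.
ORBITS_MKM = {
    "Mercury": 58, "Venus": 108, "Earth": 150,
    "Mars": 228, "Jupiter": 778, "Saturn": 1434,
}

SUN_TO_MARS_M = 150  # assumed walkable quad length, in meters
scale = SUN_TO_MARS_M / ORBITS_MKM["Mars"]  # meters per million km

prev = 0.0
for planet, radius in ORBITS_MKM.items():
    pos = radius * scale
    print(f"{planet:8s} {pos:5.0f} m from the sun ({pos - prev:4.0f} m walk)")
    prev = pos
```

At that scale the inner planets sit a few tens of meters apart, the hop from Mars to Jupiter comes out to roughly 360 meters, in the same ballpark as the walk to Memorial Church described above, and Saturn lands nearly a kilometer out, off the far end of campus.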

That whole experience with the planets was also put into a story based on an ancient Chinese story about the moon goddess and her husband, an archer who had to shoot down these seven orbs that were making the earth too hot. And so we have this story where they learn about global warming as well as the planets, and it's all embedded in the story. And then, as part of this,

the key thing the LLM let us do is, they had to write about what they learned and what they were thinking about changing in their everyday lives around sustainability. The LLM allowed us to tell what they had written. And sometimes you get a kid who just writes almost nothing. And we were able to use the LLM to encourage those kids who didn't write much

to actually write more. And that was one of the big outcomes: you were able to actually have kids write more. And the kids thought the writing was one of their favorite parts of this experience, because they got feedback from the moon goddess on what they wrote. So you had the characters from the story embodied within the LLM? Right. And it was all personalized to what you wrote.

And that was the only place we used the LLM in this. And we got learning gains, which we tested by doing a pre-test and a post-test and a test a few weeks later. But the big surprise to me, you know, there are a lot of other results, but the big one was, hey, we got kids to write more. And that's really hard.

Especially since people are saying LLMs are going to be the downfall of writing by humans. And you have a counterexample there. Just to understand this a little bit more, is it that they're interacting with the LLM and it's prompting them, like, hey, what did you do today? What did you think about that planet thing? Like, how does the LLM get them to write more? So in this case, the LLM asked them some specific questions about what things they might change in their lives later

with respect to sustainability, and about what they had learned. And then, if they didn't really write much, it kind of encouraged them to write more and asked follow-up questions. And even if they had written something, it could then respond relative to what they had written and also encourage further responses. So it's like you really had a person who read what you wrote and gave you feedback that was really relevant to what you wrote, rather than a canned response that a computer program might have given in the past.
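A minimal sketch of that encourage-more-writing pattern, assuming an in-character persona and an invented word-count threshold (illustrative only, not the Moon Story code):

```python
# Sketch of in-character writing feedback: nudge reluctant writers with a
# follow-up question; respond specifically to kids who wrote more.
# The threshold, prompts, and ask_llm stub are invented for illustration.

MIN_WORDS = 15  # below this, treat the response as "barely wrote anything"

def ask_llm(prompt: str) -> str:
    """Stub for an LLM call; wire this to your provider."""
    raise NotImplementedError

def moon_goddess_reply(kid_text: str) -> str:
    persona = ("You are the moon goddess from the story, warm and curious, "
               "talking with a child about sustainability.")
    if len(kid_text.split()) < MIN_WORDS:
        # Reluctant writer: encourage, and ask one easy, specific question.
        return ask_llm(
            f"{persona} The child wrote very little: '{kid_text}'. Gently "
            "encourage them to say more and ask one specific, easy "
            "follow-up question about their idea."
        )
    # Engaged writer: respond to their actual points, then invite more.
    return ask_llm(
        f"{persona} The child wrote: '{kid_text}'. Respond to their specific "
        "points, praise something concrete, and invite one further reflection."
    )
```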

The other thing that you didn't stress, but that I have to note, is that this was presumably outside. The kids were moving. The kids were not in a classroom. And this really does deliver on your introduction, that for some kids the classroom environment is not where they excel. And you could imagine that by putting them outside, putting them in space, having them move, this created a whole different set of

skills and interests, and it just seems like you delivered on that promise. Yeah. So one big idea of this, and this is also where I got the original idea, comes from the science fiction author Neal Stephenson's 1995 novel, The Diamond Age. I give Neal Stephenson full credit, because this is where this idea came from. And I've been thinking about it since 1995, though not really seriously till 2010, when the iPad came out and I thought, oh, that's the device he was describing.

But the idea was, kids today, they're sitting inside on a screen. They're not outside playing, you know, stickball for you New Yorkers, baseball for us, skateboarding, things outside like we did when we were growing up. And parents lament that their kids are just inside. So as part of this,

I didn't want to just create another thing that was just going to force you to be inside. Obviously, there's parts of it that you might do inside, like reading a book. But there's other parts where we wanted you to go out and do an activity in the real world, whether it's in your backyard, on your block, or maybe with your parents down on a trail. So we've done a previous one, actually, where you look at eucalyptus trees.

And you take a picture of one, you smell the leaf, and you actually learn about the fires involving eucalyptus in the Oakland Hills. And you learn about kind of the controversy: you know, they're not native, should we get rid of them or not? And you have to kind of debate. So that was part of that story. But since then, we've doubled down on what we could do with the LLM. So this summer, we built a new system.

Now, this system, which we call Acorn, again has an environmental theme. Kids learn about trees that are local here to the Bay Area, because that's what we're using, but this kind of tree you could find in other places. So they learn about the California oak. They learn about oak trees, and they learn about the ecosystem of the different animals that use the acorns as well as live in the trees. And there are these other characters, these animals, that are going to teach them these things.

Now, what's different about this one is, in the previous one, we had to design the whole story and write the whole story. And we worked with authors to help us. And it's all kind of written down in the code. In this one, we simply had to define the characters, define the outline of the story of what would happen, and then put some constraints on certain things you would have to accomplish or learn before you could move to a certain part of it.

Then, from that, the LLM generated the whole story on the fly for each kid. And the kid could take a different path through the story depending on what they wanted to learn. So there are two big things, and again, we have outdoor mobile augmented reality as part of this. What was really interesting in this one is, one, writing those stories and getting them done well is hard. Two, this allowed that to be done automatically and personalized to how the kid did it.
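Here is a minimal sketch of that outline-plus-constraints approach: authors define the characters, the story beats, and the learning prerequisites, and an LLM writes the actual prose on the fly for each kid. The names and the `generate` stub are invented; this is not the actual Acorn code.

```python
# Sketch of constraint-driven story generation: authored characters, an
# authored outline of beats with prerequisites, LLM-generated prose.
# Hypothetical illustration only, not the Acorn code.
from dataclasses import dataclass, field

@dataclass
class Beat:
    name: str
    goal: str                                        # what the kid learns here
    requires: set[str] = field(default_factory=set)  # prerequisite beats

CHARACTERS = {
    "Oakley": "a wise old California oak",
    "Scrub": "a scrub jay who caches acorns",
}

OUTLINE = [
    Beat("meet_oak", "introduce the California oak"),
    Beat("acorn_cache", "how jays spread acorns", requires={"meet_oak"}),
    Beat("ecosystem", "the animals that depend on the oak",
         requires={"meet_oak", "acorn_cache"}),
]

def generate(prompt: str) -> str:
    """Stub for an LLM call; wire this to your provider."""
    raise NotImplementedError

def next_beats(completed: set[str]) -> list[Beat]:
    """Beats the kid may choose next: prerequisites met, not yet done."""
    return [b for b in OUTLINE
            if b.name not in completed and b.requires <= completed]

def tell_beat(beat: Beat, kid_interest: str) -> str:
    """Generate this scene's prose, personalized to the kid."""
    cast = "; ".join(f"{name}, {desc}" for name, desc in CHARACTERS.items())
    return generate(
        f"Characters: {cast}. Write a short, friendly story scene that "
        f"teaches: {beat.goal}. Tie it to the child's interest in "
        f"{kid_interest}, and end with a question inviting them to explore."
    )
```

Because only the outline and constraints are authored, a curriculum tool like the one James describes next could let educators fill in beats and goals without writing any prose themselves.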

And what we found is we had learning gains bigger than any study I've ever done on education with large effect sizes, which is hard to get. Very hard. So this is working.

So it was working. Again, these are small studies, hey, you did this thing for an hour. It's not like a whole curriculum. Now, there's real potential in this last one. Because we wrote it in that way, we think we can now build a tool on top of that toolkit that would allow educators or curriculum specialists to essentially come up with the curriculum and only have to outline what they want,

and the learning goals and how to tie them together. And then the system could generate it. So we really have a chance of scaling this up in a way that we didn't before. So that's the next step: to build that kind of tool on top of it. Incredibly exciting. So listen, in the last minute or two, I want to ask you about something a little bit different, which is that you're a leader of an institute at Stanford devoted to human-centered AI. Yes.

And I wanted to ask you, it sounds good to me, but what is human-centered AI and why is it different? Is our current AI not human-centered? Yeah, I would say

current AI in general is not human-centered unless you deliberately think about it. And we started the Institute a little over five years ago now with this idea of human-centered AI. But after a couple of years of that, I got a little dissatisfied and felt, oh, that was just kind of an empty promise. We were just saying human-centered without trying to define what it meant to make something human-centered.

And so what's interesting about AI systems is they have more chance of having what I would call side effects on other parts of your community or society. So this can happen in traditional software, but in AI, it's much more common. For example, what is the impact of

your system on the people who label your training data in Africa? If you don't pay them well, or if you cut them off, there's an impact there. Or what's the impact of a medical system where you're not the user of it, but your doctor maybe uses it and decides that you're not going to get some life-saving care? You're impacted even though you're not a user. So much of how we think about designing software systems is about what we call user-centered design: let's involve the users and make sure their needs are met.

But in these AI systems, there's more and more of the case where the user is not the same person as those who are impacted.

So what I've advocated is that human-centered AI means we still need to do user-centered design, but we need to go beyond that and also do community-centered design, to get at the community that might surround the system and be impacted by it. Let's say somebody is affected by a criminal judgment of whether they should get prison or, you know, home release, or how much bail they should have. That

affects not just the judge who's using it. And then finally, if an AI system becomes ubiquitous, think about the ubiquity of our social media applications and what kind of information we see there, you can start to have societal-level effects. So human-centered AI means we need to design at the user level, but also the community level and the society level, and think about all of those together when we're designing AI systems, if we want them to have a positive impact.

That is fantastic and it sounds like you are walking the walk through the projects that you've just been telling us about. So that really is a great vision.

Thanks to James Landay. That was the future of AI coaching. You've been listening to the future of everything. You know, we have more than 250 episodes in our back catalog. So you can listen at a moment's notice to the future of many things. Also, please remember to hit the follow icon on whatever app you're listening to, to make sure you're always alerted to our new episodes.

You can find me on a lot of social media like Bluesky, Mastodon, and Threads, @RBAltman or @RussBAltman. And you can also find me on LinkedIn, Russ Altman, where I announce all of the new episodes. And you can also follow Stanford Engineering @stanfordeng.

If you'd like to ask a question about this episode or a previous episode, please email us a written question or a voice memo question. We might feature it in a future episode. You can send it to thefutureofeverything@stanford.edu. All one word, the future of everything. No spaces, no underscores, no dashes. The future of everything at stanford.edu. Thanks again for tuning in. We hope you're enjoying the podcast.