
Why Fei-Fei Li is Still Hopeful About AI (… and Elon)

2023/10/16

On with Kara Swisher

Chapters

Dr. Fei-Fei Li discusses the potential catastrophic risks of AI, including disinformation, job losses, and privacy issues, emphasizing the need for humane development and diversity in the field.

Shownotes Transcript

On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Raul Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.


Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher, and I'm Kara Swisher. And I'm Naima Raza. Today, we're talking about artificial intelligence, and our guest is Fei-Fei Li, the famed Stanford AI professor and co-director of the Human-Centered AI Lab. She's also the author of a new book called The Worlds I See, which is

Kind of part personal memoir and part AI history, and it'll be out in November. Yeah, she's one of the top people in AI and one of the earliest to work on it. She worked at Google, she worked at Stanford. She has a huge reputation in the sector and is one of the few women in it, actually, at the very top. AI has been a cornerstone of our coverage on this podcast. We've had on Sam Altman from OpenAI, Reid Hoffman and Mustafa Suleyman from Inflection AI, Yusuf Mehdi from Microsoft,

And also Tristan Harris, who's not on the business side anymore, an ex-Googler who's now turned into a researcher who's kind of ringing the bell on AI. How would you stack rank them from bullishness to bearishness? I think they just have different opinions. I think most of them are...

Obviously, Tristan is on the more worried side and extremely worried, and others are more positive. I'd say Sam is probably the most positive of those. But they're all aware of the problems, and I think probably Dr. Fei-Fei Li is in the middle, and I think that's the place to be.

She also was an early contributor to this technology. She created ImageNet, which is a data set that played a groundbreaking role in AI's development. That's right. She was trying to recognize photos and trying to figure out how photos are recognized by AI. It was very early work because, you know, even a three-year-old knows what a cat is. And it took a while to train a computer to understand that.

But in some ways, she bears some similarities to Geoffrey Hinton. They both worked at Google. They overlapped early in their careers. Geoffrey Hinton, of course, is the AI scientist who's

raised huge alarm about AI earlier this year, stepped down from Google, and is very bearish on the technology. And she is not quite there. Well, it's not the same kind of bearishness; Geoffrey is doing more of the end-of-humanity thing, as many people do. It's sort of a doomsday idea. And she is more: we're not focusing on the real problems, which are things like justice, you know, not applied correctly using computer technology

and machine learning, and things that will affect more people. I don't think she thinks the end of the world is nigh, but it is for a lot of people if AI goes the wrong way in all kinds of small areas. And I think she's right. Yeah, she has a humanistic vision, an aspiration, for artificial intelligence, that it can have great outcomes in things like medicine. She holds space for all of that.

Our last AI interview was actually in June, so it's been a while since we've covered this. Since then, the EU has moved into the late stages of its AI regulation, the AI Act, which will be the first major regulation of generative AI.

In the U.S., there's a lot of screaming and hand-waving, but baby steps in terms of actual changes. Gavin Newsom signed an executive order on AI in California. The White House has gotten AI companies and researchers together. And then there was just this Politico article, which examined kind of a billionaire-backed nexus of

political influence in Washington. Yeah. That one's more in the end-of-humanity group. It's backed by Dustin Moskovitz and his wife, Cari Tuna. He was a Facebook co-founder. He made all his money there, and they fund a lot of this work. Yeah. And so, you know, they're trying very much to get people worried about things, and they have undue influence because they place fellows in a lot of the offices. So they're going to have an influence. That was the tenor of the article. You know, I just think who

these legislators are hearing from matters a great deal, and what they should do is cast a very wide net. They tend to cast not such a wide net and talk to the companies more than they talk to all the kinds of people affected. But, you know, Dr. Li has been at the White House, and there might be an executive order on this issue. Certainly, you know, waiting for Congress to act is going to be more difficult. The Politico article you shared, it was interesting. There were kind of two major concerns in it. One was the potential bias that comes from having Moskovitz involved with

these Horizon fellows that are placed in all kinds of departments and regulatory arms of government. And Moskovitz, of course, is tied up in OpenAI and Anthropic. So one concern was, hey, there might be some unfair bias towards certain companies. And the other concern was that this focus on artificial intelligence,

given the media attention, given the money around it, is distracting from more pressing tech regulation, things like antitrust or social media algorithms. I guess they can do all of it at once. You think Washington can do all of it? Well, they should. That's their job. You're always saying they have done nothing at all, and you think they could do everything all at once. They have done nothing, but I think they can. I just think they aren't. That's different. These are all linked together. It's systemic.

and they need to just get together and figure out a whole range of things. And, you know, you don't want to do this by executive order. Obviously, this has to be congressional. It has to be pushed through to the various departments. I mean, every single department has to be part of this. But

Her real worry is the concentration of power. In this book, she articulates the worry around the concentration of power in the hands of the private sector versus public sector, universities, etc. And she also articulates a worry about the lack of diversity. Because she is, we just named a bunch of people we've interviewed on AI, and she is the first woman we've spoken to. Yeah, it's the same problem in all of tech, right?

So I don't know what to say. It's just the same thing. And this is really important. She's obviously a big name, and she's someone who's been very influential, but it's dominated by a certain type of person. Homogeneity is a real problem in tech. And she's very different from that. She was born in China. She immigrated to the United States in her teens.

She went to Princeton undergrad, but while most kids were at dining clubs, she was helping her parents at their dry cleaner. She would go back home on the weekends and work in the dry cleaning business that they ran. And so she had a very different lived experience and was a voice that we were really excited to hear from. Yeah, I'm surprised I haven't interviewed her over the years. I sort of kick myself because I think she's really someone I have admired for a long time. I went to hear her speak a couple of times when I was at grad school, and she's fantastic. Mm-hmm.

Yep. Anyways, let's take a quick break and we'll be back with Dr. Fei-Fei Li. This episode is brought to you by Shopify.

Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech. It's all right.

Thank you for coming. I want to start by contextualizing the moment we're in with AI, which you've been writing about for a long time and working on for a long time. It's a field that's been developing for decades, which people do not realize. It's just sped up and gotten more attention lately. Talk about the elements of it right now, the landscape we're in at this moment. Yeah, first of all, Kara, very happy to be here. It's quite an honor.

Even for someone who's been working in this field for decades, this feels like an inflection moment. And I think part of this inflection is really a convergence of public awakening, policy awakening, and the power of the technology and the business that it is impacting. From Hollywood to sci-fi, it's been

in the ether of public consciousness. But I think this time is different; it's real. When anyone, you know, from young kids to the elderly, can just type in a window and have a

legitimate conversation with a machine, and that conversation can be about almost any topic. Some topics can be deep, right? Like the details of biology or chemistry, or, you know, geopolitics, or whatever.

It really is the entire world recognizing we passed the Turing test. And explain what that is for people who don't know. Of course. The Turing test was proposed by the computer scientist Alan Turing as a test of whether computers can think like humans,

to the extent that it can make you believe it is a human behind the curtain if you don't know it's a machine. But interestingly, in 2018, you wrote, I worry, however, the enthusiasm for AI is preventing us from reckoning with its looming effects on society. You were one of the first. I paid attention when you wrote this piece. Despite its name, there's nothing artificial about this technology. It was made by humans, intended to behave like humans, and affects humans. So if we want it to play a positive role in tomorrow's world, it must be guided by human concerns.

And you called it human-centered AI. Why did you want to call it human-centered AI? I still feel this is going to be a continued uphill battle, but it's a battle worth fighting. Like I said, at the center of every technology, from its creation to its application, is people. And because of the name, artificial intelligence, by the way, it's better than its original name, which would have been cybernetics.

Cybernetics, that's scary. Cyberdyne, Terminator, right? That was the company. But artificial intelligence really as a term, it gives you a sense of artificialness. So I wanted to actually explicitly call out the humanness of this technology, especially given...

It's touching on intelligence. It's going to be our collaborator at the most intelligent level. And we cannot afford to lose sight of its impact. That's why I want to call it human-centered. Right. One of the things I do a lot is, and I wish you could say for the record: it is not sentient. It is not. I get tired of that. No, it's not sentient. So the idea that it is us, and it is not. It is not...

Self-aware is different from sentient, different from being human. Explain that for average people, because they do begin to think that these are human.

They're not. They're feeding back things about us that we put in. Yeah, first of all, Kara, I think you're right. I think it's our responsibility at this moment of the technology not to go hyperbolic about it being sentient. There's no evidence of it being sentient. If we somehow vaguely define sentience with awareness and intention and all that,

Of course, this is pushing into philosophy. Even we humans don't have a precise definition of consciousness and sentience. But what this technology right now is, is it's ingesting a huge amount of data that is mostly human-generated, especially in the form of internet language, as well as other digital books and journals.

And then it has an ability to learn the patterns of this language, in what we call a sequential model, to predict: when you say the word Kara, it's very likely to predict the next word is Swisher. So it has that capability. I do think some of the thinkers of our time are starting to

see the power that is worth mentioning, which is that because it's machines ingesting data, it has far more capability than a human.

Yes, the bigger brain. Yes, it is the bigger brain. And when you have the bigger brain and faster chips to compute with in the bigger brain, you are able to predict patterns in a more powerful way. This is why your bigger brain can do chemistry, can do history, can do a lot of things. Right.
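To make the next-word prediction she describes concrete, here is a minimal, hypothetical sketch: a bigram counter that predicts the most frequent next word from co-occurrence counts. It is a toy stand-in for the transformer models under discussion, not how they are actually built; the tiny corpus and the names in it are invented for illustration.

```python
# Toy next-word predictor: counts which word follows which in a corpus,
# then predicts the most frequent continuation. A bigram model like this
# is a drastically simplified stand-in for a large language model.
from collections import Counter, defaultdict

corpus = "kara swisher talks ai . kara swisher asks questions .".split()

# For each word, count every word observed immediately after it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    counts = next_word_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("kara"))  # -> "swisher", the pattern the data taught it
```

Transformer models do the same kind of conditional prediction, but over long contexts and with billions of learned parameters rather than raw counts.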

And that is scary, and it legitimately has consequences. Certainly, because it can then start to find patterns, as you said, which is the thing you started with, with ImageNet, the idea of patterns. Talk about ImageNet and what it was and why you went into it. Yeah, well, ImageNet was a project that I began working on with my students back in 2007. It became public in 2009, and the ImageNet moment

that the AI history books tend to write about happened in 2012. So it's really five years after the onset of the project. Because in 2012, the ImageNet Challenge, which is an international competition of AI algorithms to recognize pictures of objects,

was won by a group of Canadian computer scientists led by Professor Geoffrey Hinton. And that was the first time a neural network algorithm demonstrated to the world how powerful it is when there is big data

and it is trained on two GPUs. So that was the moment people call the ImageNet moment, the AlexNet moment, or the beginning of deep learning. ImageNet just told you a cat was a cat, a dog was a dog, and it was difficult. You had to tag these. People had to tag them, and you had to make sure people were being honest about tagging them.
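For readers who want to see what ImageNet-style recognition looks like in practice today, here is a hedged sketch using an off-the-shelf network pretrained on ImageNet. It assumes the torchvision package (version 0.13 or later) is installed and that a local file named cat.jpg exists; neither comes from the conversation itself.

```python
# Classify one image into the 1,000 ImageNet categories using a network
# pretrained on ImageNet. Sketch only; assumes torchvision >= 0.13.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT            # ImageNet-trained weights
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()             # resize, crop, normalize

batch = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # add batch dim
with torch.no_grad():
    logits = model(batch)

category = weights.meta["categories"][logits.argmax().item()]
print(category)  # e.g. "tabby" -- one of the labels humans tagged by hand
```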

That also brought in the biases of those people, which we're going to get to later. But that's what it essentially did: it just said, cat, cat, cat. People would say that's what it was. Right. I think the meaning of ImageNet is really, it symbolizes now, in hindsight, the beginning of the big data era. Because before ImageNet, AI research was very logic-based. It was very...

This is a jargon word: Bayesian nets. You know, it was intricate math, but it wasn't really working at any scale. And it definitely was not working at human scale, nor internet scale. Yeah, a child could get images faster than a computer. Exactly. ImageNet was really a shifting of mindset, saying,

okay, look, we've been doing this wrong in the past decades. Let's actually bring data into AI. And I've got to say, Kara, even as the inventor of ImageNet, when ChatGPT and transformer technology made the next inflection point last year, in 2022, I had to take a deep breath and reflect:

My God, big data is still playing a role that even went beyond my dream back then. Yeah, absolutely. But let's talk about Geoffrey Hinton. You have similar roles in moving AI forward, though you have different contributions and timelines.

Were you surprised when he came forward after he left Google to ring the alarm bell on AI? And do you feel his fears were overblown? So I don't know if you're aware of this. I was just at my first public talk with Jeff last week. Yes. We've been friends for more than 20 years, since I was a graduate student, but it was wonderful to see Jeff and be on stage with him. First of all,

I really want to give credit to Jeff, because he is not only known as the most accomplished AI research scientist in the world, but he's always so intellectually honest, just really the kind of scientist who says what he believes. And

my understanding is that when he saw the power of the transformer language models, he became concerned about the human consequences. Of course, he has his own view about whether this is closer to sentient. And I respect his view, even though I respectfully disagree, because, like we said at the beginning,

the definition of consciousness, of sentient being, is vague. And I happen to have the benefit that half of my PhD was with

Dr. Christof Koch, who is still to this day a leading researcher in the study of consciousness. So I learned, well, I didn't learn that much because I was focusing on AI, but the thing I've learned from Christof is that this is a messy word; this is a messy definition. So...

Right. And his focus was on things like killer robots. You have focused more on the small things. And I think you notice a gender difference. Some of the leading people are men, and it's all killer robots and the bigger end-of-civilization stuff. Elon Musk, end of civilization. But you think the smaller things are much more important and impactful to humans. Well, okay. So I...

I actually think there are some immediate catastrophic risks. Okay, catastrophic. All right, go ahead. Yes, but it's not existential in the sense that Terminators are coming to kill all of us, or machine overlords. Right, that's next week. Yeah, that's next week. Catastrophic in the sense of disinformation for democracy. In the sense that, if we don't have good economic policy, the job changes will impact

certain groups of people more than others, which might even lead to unrest in our society. Catastrophic in the sense of polarization. And you and I know this very well, whether we're talking about gender or, you know, race. And catastrophic in the sense of, you know, bias and privacy and all that. So yes, I do believe there are

risks. Right. When you look at some of the risks there, Hinton and others have talked about this idea of the Terminator; that's the picture they paint. I've been at dinner parties where it's always men who come up with this scenario, and it's always women who say, actually, it's the justice system. Actually, it's this. It's actually this. Talk a little bit about that.

Because it's harder to raise the alarm that you're raising. I think you're raising an alarm about the technology when it's smaller, even though it's just as devastating in some ways. Yeah, well, Kara, we've both lived this life for many decades now. So I wouldn't call this smaller, honestly. I just... Yes, I agree.

I mean, in people's minds, it's easier to think Terminator than it is someone's going to go to jail that shouldn't go to jail, for example. Yes, and that's why I'm calling out for human-centeredness, because if we put human individual well-being as well as community well-being and dignity at the center of this, suddenly it's not smaller. I don't want to diminish that.

Down the road, the technology has even bigger impact. But I think the immediate things are really immediately important. I do think, though, Kara, I don't know if you're noticing this, that policymakers

are starting to pay attention. Yes, we're going to get to that in a minute. We are, they are. But do you yourself feel some responsibility? You know, Hinton has said that, and of course, you've probably seen Oppenheimer. He's like, look what I made. Because you were one of the early people around this, do you feel responsibility? How does that manifest itself? It manifested in returning from Google to Stanford to start this Human-Centered AI Institute.

As you know, I could have stayed in big tech and probably had more personal gain. But it was a very conscious, soul-searching decision in 2018 that, when I saw the potential human impact, I felt that

our generation, my generation, who made this technology, who unleashed this technology, has a share of the responsibility, and probably even a leading responsibility, in calling out the human impact. So this is why I started Stanford HAI and have been, in the eyes of the public

and the policy world, calling out these important measures. Okay. Let's talk about the impact then. The doomsday scenario that I just referenced, we've heard a lot about the fears, from jobs to misinformation to existential threat to humanity. What are you most concerned with of the immediate ones? One of my current biggest concerns is the

extreme imbalance, the asymmetry, of the lack of public-sector investment in this technology. So I don't know if you have heard me say this: not a single American university today can train a ChatGPT model.

I actually wonder, if you combine all the compute resources of all the universities in America today, can we train a ChatGPT model? Because... Which is where it used to be. This is where it used to be. Exactly. When I was a graduate student, I never, you know, drooled over going to a company to do my work.

But I think there should be a healthy exchange between academia and industry. But right now, the asymmetry is so bad. So now you might ask, so what? Well, we're going to have a harder time to cure cancer.

We're going to have a harder time to understand climate change. We're going to have a harder time to forecast the societal impact, whether it's economics or law or gender, race, political situations. All this is happening in the public sector: think tanks, universities, and nonprofits. If those resources are really diminished,

we're also going to have a harder time to assess what's going on. It's so interesting. I had a conversation with my son, who's at the University of Michigan, last night, and he's studying this. He's in computer science, but he added philosophy to his major, which he thought was critical. But one of the things he said to me, he goes, Mom, what's AI but optimization and efficiency via automation, and a way to leverage human goals? It's essentially a more efficient way of doing things.

And he said, so it's just a Spin scooter over walking. And what he was saying is, it's all being applied to stupid things versus bigger things. You know what I mean? So if a private company is doing it, it will not address the larger concerns. You've got a smart kid. He is. He goes, it helps you get places faster, but it's an expression of lazy capitalism. An expression of lazy capitalism. And I was like, huh. Well, I feel good about the money I'm spending there at college. Well, I...

I think there are legitimate commercial uses of AI, whether it's healthcare or... Yes. No, but he said that will be the goal. That will be the goal, versus larger societal problems. Yeah. So even the larger societal goals that get piloted in academia, hopefully some of them will get commercialized. For example, newly discovered drugs and climate-change solutions. But I...

If we don't invest in the public sector, when are we going to have that? And also, on top of that, we need checks and balances. Who's going to assess in an unbiased way what this technology is? Who's going to open the hood and understand what it is? Let's even assume for a second, Kara, that a sentient being is what we're creating.

Well, you need trusted public sector to understand what this is. There's a scene in your book where you describe running into some founders of OpenAI soon after its launch in 2015. One of them says, everyone doing research in AI should seriously question their role in academia going forward. Which founder said that?

I don't remember. I actually can't tell Larry and Sergey apart anymore. Anyway, but you write that you don't disagree with that quote, that the future of AI would be written by those with corporate resources. And you, of course, were at Google for an amount of time. In 2015, OpenAI was still a nonprofit. What do you think of their decision to move to a capped-profit model? And does it, as Elon Musk complained, feel like a bait and switch, or does it not really matter? I'm not in the heads of the founders, but I think...

It didn't surprise me. Part of it is, you know, how do you sustain a capital-intensive operation where you're going after these kinds of models?

I don't know how philanthropy can carry that. So it didn't surprise me. So what is the role, then, of researchers in a corporate-led world? What can they do? You're not in a corporation, for example. Right now I'm not. But I was at Google, and I'm still part of the Silicon Valley ecosystem. First of all, innovation is great. I do believe in innovation. But no matter where you are, a corporation, a startup, a university,

you're also a human first. There are also human responsibilities. Everything we do in life needs to have a framework. And I do believe we need to have an ethical, human-centered framework. If you're in the corporate world building social media, what is the framework you believe in for ensuring the health and mental health of our children? But

Dr. Li, you and I both know they weren't that concerned, or it was low on the stack-ranked list. It was quite far down. And Google, of course, has been accused of censorship of several of its AI ethics researchers. This is why we need a healthy public sector to be watchful, right? Who's going to assess and evaluate this technology in an unbiased way if it's left to just one player, right?

It's guaranteed to be biased. Right, and you did work on this at Google, but it hardly matters, because it's what they decide the rules are, and they are unaccountable and unelected on anything. They just decide what they want. Some of them may be nice. I had an encounter with the founders of

Google about their search dominance. And they said, well, we're nice people. And I said, I'm not worried about you. I'm worried about the next person. You know what I mean? I just don't know why you should have this much power. You were also at the center of another Google controversy regarding AI ethics, referring to Project Maven, Google's contract with the Department of Defense using AI to analyze video that could be used in targeting drone strikes. You were running that department. What did you learn from the backlash at Google? Yeah. So...

That was around 2018, right? I wasn't part of any of the business decision-making. But I learned that that was the beginning of AI's coming of age in society, and it's messy. A technology this powerful is messy. We cannot just purely look at it from a technology point of view, nor can we just look at it from

any one single angle. Around that time, there were self-driving car deaths. There were horrific face-recognition algorithm bias issues, privacy issues. So it's part of that early awakening, at least for some of us, that

we've entered an age in which this technology is messy and human values are complex.

People come from all walks of life, and they see this technology in different lights. And how do we, especially as technologists, not pretend we know it all? So in that vein, let's get to job displacement, one of Hinton's top fears about AI. He said it takes away the drudge work; it might take away more than that. You also had written there are examples of a trend toward automating the elements of jobs that are repetitive, error-prone, even dangerous. What's left are the creative, intellectual, emotional roles.

How do you see this playing out? What's your worry about job displacement? First of all, I'm not an economist. I want to give credit to my colleagues at the Stanford Digital Economy Lab, which is under Stanford HAI, who are studying this. But here's the thing. This is a big issue. Humanity, throughout our civilization, has always faced this as technology disrupts the current ways of jobs and tasks.

You know, a labor market gets disrupted. Sometimes it's for the better. Many times it becomes bloody. And I think the jury is still out. What are the sectors most affected, would you say, off the top of your head? Right now, given the latest advances in AI technology, especially built upon language, believe it or not, it's knowledge sectors, knowledge workers.

Software engineers, which is one of the most coveted jobs of the 21st century, are suddenly looking at, you know, a co-pilot. Assistants, office assistants, paralegals. Some of this will be empowering. It's not taking away jobs per se. I've actually, believe it or not, talked to writer friends and artists who are so excited

to have this tool. But in the meantime, for example, contact centers, that's a global job sector.

You know, it is definitely going to face changes. So we need to be very careful. Sort of like what happened with farming or manufacturing. So, misinformation. Another recent report found that in the past year at least 16 countries have used generative AI to sow doubt, smear opponents, and influence public debate, which they were already doing with the old internet. Now they just have it on steroids. How do we deal with that? Because this is another thing that destabilizes societies. That's why I'm worried, Kara.

So we learned that the Russia-Ukraine war is now the first information war. And even there, we've seen quite a bit of disinformation. Like you said, disinformation is an old thing in human society, but this one is being empowered by technology, and it lowers the entry

point for anyone using it. So I'm very worried about that. I think we need to look at this in a multi-dimensional way. There are technological solutions, for example, digital authentication. We cannot do this fast enough. I know I've got colleagues at Stanford doing this, but

whether you're a company that cares about the content you produce, or you're in academia, we need to get on this as fast as possible. But there have to be laws. There has to be international partnership. There has to be general public engagement, too.

Right. Like about where things come from. Yeah, exactly. And laws cannot do everything. Awareness and education are so important. Yeah. Someone told me that there's more information on the provenance of a pack of Oreos than there is on online information, because of the barcodes; they can trace where it came from, what it was. And they were like, this is a pack of cookies. We should be able to do this here. We'll be back in a minute.
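On the digital authentication idea raised above: the core mechanism is cryptographic provenance, where a publisher signs content and anyone can later verify it was not altered. Below is a minimal sketch assuming the third-party cryptography package; the keys and message are illustrative, and this is not any specific provenance standard such as C2PA.

```python
# Minimal content-provenance sketch: sign content with a private key,
# verify it with the matching public key. Any tampering breaks the check.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held by the publisher
public_key = private_key.public_key()       # shared with the world

content = b"Original, unedited interview transcript."
signature = private_key.sign(content)       # published alongside the content

try:
    public_key.verify(signature, content)   # raises if content was modified
    print("verified: content matches what the publisher signed")
except InvalidSignature:
    print("tampered: content no longer matches the signature")
```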

Let's turn to the positives, starting with health care. You mentioned drug development, which is obvious, but explain your focus on ambient intelligence and the practical applications of that. So, I don't know if you had time to read my book. Yes, I did. One of the threads of my life is taking care of an ailing parent.

And because of that, especially as a new immigrant, I have firsthand experience of healthcare from a very non-privileged point of view. We lacked health insurance for many years. We've gone through ERs, ambulatory and surgical settings, ICUs. And what I learned in healthcare is a couple of things. One, human dignity

is the most fragile thing. It doesn't even matter what medicine you're using, what technology you're using. The medical space is so fraught for human dignity. And for me, giving back, or preserving as much human dignity as possible, is one of the most important roles. This is your mom you're talking about. It's my mom's experience. But the second thing I learned is labor.

America does not have an excess of doctors or nurses or caretakers. It's the opposite. And on top of that, they're overworked, fatigued, and there are many situations in healthcare contexts where the patients are not being taken care of, not because of ill intentions, just lack of labor, lack of resources. So ambient intelligence is a way to use smart cameras

as extra pairs of eyes for our patients and caretakers. So, to watch over whether a patient has fallen out of bed, to have early detection of changes in conditions, to possibly help physical rehab

at home for patients to manage chronic conditions. These ambient sensors, whether it's cameras or microphones or wearables, can continuously discern information and package it into insights to be sent to doctors and nurses, so they know when they have to intercede. There's been a lot of that. People wear them themselves, but there are privacy concerns about ambient intelligence watching us all the time. I mean...

Every time Amazon wants to put a drone in your home, people go, hmm, maybe not so much. Absolutely. We have to confront this. In fact, our research team includes ethicists and legal scholars. Here's the thing. First of all, this is going to be a multi-stakeholder approach, right? There are moments patients want it. There are moments

the situation is so dire, we need to weigh the different sides. There are also technological solutions to really put privacy-preserving computing front and center. And this is something that my lab has been doing, whether it's on the machine learning end, or on securing the devices, or the network end, and all this. So it's going to be

multidimensional. You know, you are also facing people who don't even believe in vaccines, right? They think that's a surveillance vehicle. So that's a difficult thing. Another positive: education. Ignoring the dumb media frenzy about essay generation, which I'm tired of those stories, what's the argument for tech, AI especially, actually closing gaps in education and deepening learning? So, Kara, when ChatGPT came out, I was testing it myself. My first knee-jerk reaction was,

my God, this should be the biggest moment in the education sector, not because of fake essays; it's because of what kind of children are we making? Because very soon, or maybe already, AI algorithms can pass tests

that are focusing on memorization and regurgitation. Yet human intelligence, human capital is creativity, innovation, compassion, and all those much more complex things humans are capable of.

I really hope the entire education sector, especially K-12, but also the college level, takes this moment and realizes we need to reassess education and think, again, human first.

Put the children first and think about how we can use this tool to superpower learning, superpower teaching. In the meantime, rethink education so that our children become much more creative. Is there an area you're not thinking about? I would say autonomous vehicles, climate change. When you're thinking about the applications of advanced AI, what do you think would be the most groundbreaking? To me, scientific discovery. I think...

Everything from new materials, to curing diseases, to even things as remote as piecing together old relics at, you know, archaeological sites. Right now it's all done by PhD students and their professors. All this will be aided by AI technology.

And that's exciting. Yeah. All right. But the problem is the state of the industry and who holds power in AI right now. There are a number of things. One is the importance of diversity in the AI workforce. It's something you work on with your organization, AI4ALL. This has been tried all over technology, this idea of diversity and who's working on it. You had your own interesting introduction to this world, your personal story that influenced your work and viewpoint in advocating for more diversity, but it doesn't happen often.

Talk a little bit about the need for this. And at the same time, I think you have to acknowledge it just hasn't happened. It's an uphill battle, Kara. You experience this as much as I do. You know, to this day, I'm often the only woman, or one of very few women, in the room. And oftentimes I'm not in the room. And look at who holds the megaphone. But

compared to Ada Lovelace's day, it's been better. It's a low bar there. I know, but it's a battle we cannot give up, because this technology is too important not to include everyone. When I say

everyone, I mean both in the creation of the technology, as well as, you used the phrase decision-making, in who the decision-makers are, as well as those who hold this technology accountable. So I do think we have to continue to chip away at this.

The awareness is really important, and the empowerment of people from different backgrounds is so important. You're more confident than I am. I feel like over and over again, they continue to need to... Kara, I don't know if I'm confident. I just don't give up. That's where I am. Because look, I'm sure you face the same thing. If you and I give up,

who is the next young teenage girl going to look up to? I get that. I understand that. I am lucky because, among reporters, I suppose I would be one of the most powerful people in the room. So that makes it easier. But at the same time, it's so clear that the leadership is so homogeneous. It's...

And still the needle has not moved. It's gotten worse. Actually, I would say in AI, the needle has probably moved for the worse because of the rapid concentration of power. Yeah, right. But overall, again, Ada Lovelace is a low bar, but Grace Hopper, you know, we...

We just have to keep working at it. I guess. You saw what happened at the Grace Hopper Conference. A bunch of men invaded it this year to get jobs. I mean, really, of course they did. It's a women's conference, and therefore they should be there at the front of the line. But does there have to be a USTA of technology, not the Tennis Association, a government agency that plays a role in reconciling the public-

versus-private debate? There's been talk about agencies around AI, around technology. There still isn't one. Okay, so this is an interesting topic. We can talk about this. I don't know yet that I feel there should be an agency, for the following reasons. This technology is extremely horizontal.

Many parts of it should be handled, in my opinion, in the existing framework where the rubber meets the road, like the FDA, like the SEC. So in every agency. Right. So every agency should be very vigilant and have a sense of urgency to update what they have for this technology. Now, it's possible that...

Even with all the existing frameworks, we cannot cover it all. And when that happens, I think it's possible we should look at what agencies we should create in addition. But I guess, if you're asking my point of view, right now I think there's more urgency in figuring out the existing frameworks.

I don't know about you. Actually, you're in journalism. I really wake up worried that I'll hear the news of the first death from self-diagnosis, from self-prescribing drugs, using

ChatGPT. I don't know if it has happened yet, or if it's been reported yet. Although people were already on the internet doing that themselves. Yeah, but again, we're talking about lowering the bar, right? Right. Just like disinformation. It's just easier now. Right. So for some of these rubber-meets-the-road scenarios, we've got agencies, and they just...

Oh, they need to move faster. Well, that's what they say. Talk about the idea of advancing this. This summer, you were part of a group that met with President Biden to discuss AI. You told him to take a moonshot mentality, which is a Google word, by the way. That's their favorite word over there. Well, it's a JFK word. Yes, I get it. It's a JFK word. I know it is. But they love to say the word moonshot every five minutes. But what does that mean to you?

What it means to me... I'll give it to JFK, but what does it mean to you? Well, I was a physics major, and I think about, you know, reaching the moon and beyond.

I think it means to me the kind of public sector investment that is so important to usher in the next generation of technology for scientific discovery. Such as Kennedy with the space program. Yeah, as well as the kind of...

public participation in and evaluation of this technology. So it's both for innovation as well as for evaluation and a framework for this technology. Sure. What have you gleaned from your interactions with the White House on AI so far? Okay. So, like you say, you began this talk with the 2018 op-ed. At that time, I don't think

many of them cared, dare I say, or only a few people were thinking about it. Fast forward to 2023: I've been to DC a few times, and there's so much more talk about this. So it has reached the level of consciousness. I still think we urgently need to help our policymakers understand what this is. And to that end, Stanford HAI is

hosting as many educational programs as we can. We're creating policy briefs. We're doing boot camps. We're having, you know, closed-door training sessions for the executive branch, because we have to begin with giving them proper information about what this is. So let's talk about legislation. Your Stanford institute supports the CREATE AI Act. First of all, explain the bill and what's notable. Why are you supporting it?

This bill, if passed, will create a public-sector AI research cloud and data repository. What that means is that universities across the country,

think tanks, and nonprofits will be able to access more compute and possibly more data to do AI research. We just talked about bottom-up research, right? If I'm a cancer researcher

who's looking at a rare disease, not your big cancers, it's very hard for me to get funding, to get industry support, to get philanthropy. But then I can hopefully get onto this cloud and use the compute, and some of the data coming from NIH or CMS or whatever, to do my research. This bill is around $2.6 billion over six years

for the public sector. Microsoft gave OpenAI $10 billion, I think, of compute. Amazon just gave Anthropic four. Four, yeah. Small. This is small, but...

It can move the needle. All right, before we leave, I need to ask you about Twitter. Sorry. You served on its board from May 2020 to October 2022. I know there's only so much you can say. I get it. Those were tumultuous years for the company, but even before Elon's bid, you know I've talked about this, it was a troubled company for a long time. What was your impact on the board? What did you hope to accomplish by joining that board? And how do you rate your success?

I don't think I'd rate it highly, for the same reason you talked about. So here's the real story, right? Parag invited me. This is the former CEO. Well, he was the CTO. CTO, right. Right. Well, Parag, before he was the CEO, he was actually the CTO. He's a Stanford CS alumnus, a PhD alumnus. So he and I talked about big data. He talked about

different aspects of using machine learning techniques. I mean, really as mundane as advertisement optimization to other aspects.

And it's under that technological premise that I joined. I was very happy to be on the board when Twitter established its ethics team, Rumman and her colleagues. I was

participating more on the technological side of the discussion. But also, you know, I did see my role as someone with a human-centered belief in technology. But there were far greater forces involved

that dominated the company in the last couple of years. Right. Well, yeah, the new owner. Can you talk a little bit about how the board went from a poison-pill tactic to ward him off to accepting his price? I think you had no choice from a shareholder point of view. It was such a high price.

Yeah, so that, Kara, truly, I'm sure it's public knowledge. There was a committee within the board that led this effort. And not surprisingly, I was not on this committee. I did not have the expertise. And frankly, I...

My understanding was that it's my fiduciary duty to look at... That price. We can argue whether it should be that way, but...

It was that way. It is that way. And that's what it is. But really, the juicy details, I was not part of. You were not part of. But did you feel regret in handing over the company? I know at the time I felt someone's got to do something here because it was bumping along as a company. I thought he could possibly do a good job. I thought he overpaid. And especially when he started with his antics of not buying it.

Then you started to see the trouble. How do you feel now, afterwards? So I like the phrase public square. And I think part of the reason they also picked me as a board member is that I actually use Twitter, right? So I did use it, and I still do use it. Yeah, most of the board didn't, for people keeping track.

Much of the board did not. I use it as a public square. But it's actually a very philosophical term. I know it came from ancient Greece, where, you know, there were debates and discussion. A public square is public.

But then if you really look at the ownership of even a public square, who owns a public square? Governments tend to own public squares. So what does it mean when a private company is a public square? I agree. Where are the boundaries? Especially when the private company is really one private citizen, right? So it's actually a very philosophical issue. Do I regret it? I don't know. I'm still using it.

I want to use it as a public square, but I don't know whether it is or not. I really don't feel I have a strong sense. How do you assess his stewardship so far? He has cut those ethics people. He has cut the trust and safety people, said they're not necessary because it's a public square.

I think every company, every organization, every country needs frameworks, needs norms. And these frameworks should be maximizing multi-stakeholder well-being. Well-being includes many things: financial, freedom of speech, dignity, and all that. And I think we have a long way to go.

Yeah. Do you have hopes for it under him? An early AI investor who thinks big thoughts, for sure. Absolutely. Absolutely. Let me say I'm going to keep my hope up and observe. And observe. Yeah. Any assessment so far? I'm not an avid user. I never had been that avid.

So I don't think I'm the best person to assess. We'll see. I agree with you. There's a need for a public place, although I don't think you can have a public square owned by a private company, especially when it's run by one person. I think that's called a dictatorship. That's what I used to call it, in any case. Anyway, one last question. I'm curious about what your outlook on the future is. You have young children; so do I. Do you feel hopeful or hopeless for them, and why?

So people ask me this question a lot, and my answer is always the following. I'm only hopeful when I feel I'm participating in the change. You know, many people like me, like you, we're not powerful, we're not

extremely rich by any measure. If people like us feel we no longer have any say, any way to participate, then the hope will be gone. But right now, I still feel, as a researcher, as a technology leader, I'm working. I'm working with students, who always make me feel hopeful. I'm working with

civil society. I'm working with policymakers. I'm working with industry. And right now, I'm still holding on to hope. Maybe it's just my personality: I'm not letting go of the work. Therefore, I'm hopeful. But if I feel there's no place for people like me to participate, then that's the beginning of trouble. This is why I wanted to write the book.

I wanted to encourage all the young people, especially those from different backgrounds, to join us and feel the hope through their own doing. Well, you have been a pioneer and an inspiration for a lot of people, I think, more than you realize. Dr. Li, thank you so much. I'm excited to see more of your work. It's a great book. Thank you, Kara. Thank you so much.

She still has hopes for Elon. What do you think of that? Well, all tech people, they can't help themselves. I mean, she has to. Like, what is she going to do? I don't think she has any admiration for him, for sure. She's a very kind person in general and very caring about the human race. And I think Elon cares about saving the human race, but

individual humans are more problematic for him. Her point about public squares was really interesting in that conversation about Twitter. What is a public square? What defines it? And is it a public square if it's owned by a private company? Which is a similar articulation to what Naomi Klein has said,

in some ways. Well, we've said it for years. I've been writing that for years. It's not a public square. It's owned by private companies. They can make their own rules over and over and over again. It's hard to... I know people think it is. It just isn't. It just isn't. It's owned by giant private corporations. It's a city owned by giant corporations. And certain voices get amplified. Others get drowned out. So the idea that it's a public square of equal opportunity is...

Certainly not the case. Certainly not in some places these days, Kara, as you've been saying. Right, yeah. I mean, I think what's important is that the government get reengaged in certain things that are critical for our country. Some things aren't solved by tech, and it's important that government is there, because it represents the people. Even if they do it badly, they represent the people.

The sums of money that the government is putting into regulating AI or investing in AI are tiny versus the sums of money that these private companies can put in. And that was her point. I loved it. She said, there are some immediate catastrophic risks. Yeah, I don't like when they drag out the word catastrophic, but, you know, just some immediate catastrophic risks, don't worry. And the main one was this kind of squeezing out of the public sector, and specifically universities. What I appreciated about

her is, she talks a big game about accountability and

about the people, the humanity of this. And then we've asked a lot of people the question, do you feel some accountability? Do you feel responsible? I mean, that's a common journalistic question. A lot of people beat around that. And her answer was, yes, I do. Yeah, she does. She does. Well, it's hard. I think a lot of people who create things don't understand what later happens to them. She's at least thoughtful about it. And you can't deny it. There are pluses and minuses. And I think she's just, she's an adult. That's what she is. That's all. Adults know how to do that.

She has also worked hard to increase diversity in tech and to get more people like her, you know, empathetic to the point, into the room. She's often the only woman in the room. I remember, in the prep for this, reading that 2018 Wired article, in which she is the only woman in the room. She is. And are you hopeful for that to change? I know you're not hopeful about Elon, but are you hopeful for that to change? No, I'm not. I've written this story for 20 years,

20 years about the problems and the numbers, and it hasn't changed. What do you think gets it to change? Nothing. I don't think it does. I don't think this industry... Do you think over the course of generations, more... I don't think this industry is committed to diversity in a significant way, no. I don't know why they would be. They like themselves. As I've said hundreds of times over the past two decades, it's a mirrortocracy, not a meritocracy. And that's what they like, and that's the way it's going to go. I'm actually more hopeful on this, I think, especially as

people are changing what they study in school. Other countries are pushing, you know, and in some cases advancing beyond the United States in terms of the number of people studying new technologies or starting companies. I think there could be a real change in the room, because you have to get to a point where you don't need to rely on someone else to give you a role at their company and give you a promotion,

where you can start your own thing. You can be as hopeful as you want. The numbers are declining. From many, many years ago, absolutely, there were more women all over tech. They're declining. CEOs, there's Lisa Su now, I think, or Dr. Su. It's gotten worse in terms of diversity. And as new things come in, AI is dominated by the same people. And robotics, dominated by the same people. All the areas of the future are, and they're dominated by big companies. It has not changed.

I think that's the case in a lot of the world, not just in tech. Nope. But that sounds like Vinod Khosla to me. That's his argument: everyone's terrible. That's not really a good thing. Well, I'm not saying it as an excuse. I'm saying, you know, if we can change it, we should try to change it everywhere. Yeah. Anyways, read us out, Kara.

Today's show was produced by Naima Raza, Christian Castro-Rossell, and Megan Burney. Special thanks to Kate Gallagher and Claire Teague. Our engineers are Fernando Arruda and Rick Kwan. Our theme music is by Trackademics. If you're already following the show, Fei-Fei Li is right about AI. If not, get ready for Cyberdyne Systems. Hasta la vista, baby.

Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Thursday with more.