
You Might Also Like: Smart Talks with IBM

2024/10/17

Nobody Should Believe Me

Chapters

Dr. David Cox explains the concept of foundation models in AI, their origins, and their significance in modern AI research and applications.
  • Foundation models are large, pre-trained models that can be fine-tuned for specific tasks.
  • IBM and MIT have a long history of collaboration in AI research.
  • Foundation models reduce the labor-intensive nature of traditional AI development.

Shownotes Transcript

Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're continuing our conversation with new creators, visionaries who are creatively applying technology in business to drive change, but with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game-changing multiplier for your business.

Our guest today is Dr. David Cox, VP of AI Models at IBM Research and IBM Director of the MIT-IBM Watson AI Lab, a first-of-its-kind industry-academic collaboration between IBM and MIT focused on the fundamental research of artificial intelligence.

Over the course of decades, David Cox watched as the AI revolution steadily grew from the simmering ideas of a few academics and technologists into the industrial boom we are experiencing today. Having dedicated his life to pushing the field of AI towards new horizons, David has both contributed to and presided over many of the major breakthroughs in artificial intelligence.

In today's episode, you'll hear David explain some of the conceptual underpinnings of the current AI landscape, things like foundation models, in surprisingly comprehensible terms, I might add. We'll also get into some of the amazing practical applications for AI in business, as well as what implications AI will have for the future of work and design. David spoke with Jacob Goldstein, host of the Pushkin podcast What's Your Problem?

A veteran business journalist, Jacob has reported for the Wall Street Journal and the Miami Herald, and was a longtime host of the NPR program Planet Money. Okay, let's get to the interview. Tell me about your job at IBM.

So I wear two hats at IBM. So one, I'm the IBM director of the MIT-IBM Watson AI Lab. So that's a joint lab between IBM and MIT where we try and invent what's next in AI. It's been running for about five years. And then more recently, I started as the vice president for AI models. And I'm in charge of building IBM's foundation models, you know, building these big generative models that allow us to have all kinds of new, exciting capabilities in AI.

So I want to talk to you a lot about foundation models, about generative AI. But before we get to that, let's just spend a minute on the IBM-MIT collaboration. Where did that partnership start? How did it originate? Yeah, so actually, it turns out that MIT and IBM have been collaborating for a very long time in the area of AI. In fact...

The term artificial intelligence was coined in a 1956 workshop that was held at Dartmouth. It was actually organized by an IBMer, Nathaniel Rochester, who led the development of the IBM 701. So we've really been together in AI since the beginning. And as AI kept accelerating more and more and more,

I think there was a really interesting decision to say, let's make this a formal partnership. So IBM in 2017 announced it would be committing close to a quarter billion dollars over 10 years

to have this joint lab with MIT. And we located ourselves right on the campus, and we've been developing very, very deep relationships where we can really get to know each other, work shoulder to shoulder, conceiving what we should work on next, and then executing the projects. And it's really, you know, very few entities like this exist between academia and industry. It's been really fun over the last five years to be a part of it.

And what do you think are some of the most important outcomes of this collaboration between IBM and MIT? Yeah, so we're really kind of the tip of the spear for IBM's AI strategy. So we're really looking at what's coming ahead. And in areas like foundation models, as the field changes,

MIT people, faculty, students, and staff, are interested in working on what's the latest thing, what's the next thing. We at IBM Research are very much interested in the same. So we can put out feelers for interesting things that we're seeing in our research,

interesting things we're hearing in the field, we can go and chase those opportunities. So when something big comes, like the big change that's been happening lately with foundation models, we're ready to jump on it. That's really the purpose. That's the lab functioning the way it should. We're also really interested in

how do we advance AI that can help with climate change or build better materials, and all these kinds of things that are sometimes a broader aperture than what we might consider just looking at the product portfolio of IBM. And that gives us, again, a breadth where we can see connections that we might not have seen otherwise. We can think of things that help out society and also help out our customers. So in the last, whatever, six months, say, there has been this

wild rise in the public's interest in AI, right? Clearly coming out of these generative AI models that are really accessible, you know, certainly ChatGPT, language models like that, as well as models that generate images like Midjourney. I mean, can you just sort of briefly talk about the breakthroughs in AI that have made this moment feel so exciting, so revolutionary for artificial intelligence? Yeah.

Yeah, you know, I've been studying AI basically my entire adult life. Before I came to IBM, I was a professor at Harvard. I've been doing this a long time, and I've gotten used to being surprised. It sounds like a joke, but it's serious. Like, I'm getting used to being surprised at the acceleration of the pace of progress.

Again, it tracks actually a long way back. There's lots of things where there was an idea that just simmered for a really long time. Some of the key math behind the stuff that we have today, which is amazing, there's an algorithm called backpropagation, which is sort of key to training neural networks. That's been around since the 80s in wide use.

And really what happened was it simmered for a long time, and then enough data and enough compute came. So we had enough data because...

We all started carrying multiple cameras around with us. Our mobile phones have all these cameras, and we put everything on the internet, and there's all this data out there. We caught a lucky break that there was something called a graphics processing unit, which turns out to be really useful for doing these kinds of algorithms, maybe even more useful than it is for doing graphics. They're great at graphics too. And

Things just kept kind of adding to the snowball. So we had deep learning, which is sort of a rebrand of the neural networks that I mentioned from the 80s. And that was enabled again by data, because we digitized the world, and compute, because we kept building faster and faster and more powerful computers.

And then that allowed us to make this big breakthrough. And then more recently, using the same building blocks, that inexorable rise of more and more and more data met a technology called self-supervised learning. The key difference there: traditional deep learning for classifying images, like is this a cat or is this a dog in a picture, those technologies

required supervision. So you have to take what you have and then you have to label it. So you have to take a picture of a cat and then you label it as a cat. And it turns out that that's very powerful, but it takes a lot of time to label cats and to label dogs. And there's only so many labels that exist in the world. So what really changed more recently is

is that we have self-supervised learning where you don't have to have the labels. We can just take unannotated data. And what that does is it lets you use even more data. And that's really what drove this latest sort of rage. And then all of a sudden we start getting these really powerful models. And then really, this has been simmering technologies, right? This has been happening for a while.

and progressively getting more and more powerful. One of the things that really happened with ChatGPT and technologies like Stable Diffusion and Midjourney was that they made it

visible to the public. You put it out there, the public can touch and feel, and they're like, wow, not only is there palpable change and, wow, I can talk to this thing, wow, this thing can generate an image. Not only that, but everyone can touch and feel and try. My kids can use some of these AI art generation technologies, and that's really just

launched, you know, it's like a slingshot that's propelled us into a different regime in terms of the public awareness of these technologies. You mentioned earlier in the conversation foundation models, and I want to talk a little bit about that. I mean, can you just tell me, you know, what are foundation models for AI and why are they a big deal? Yeah, so this term foundation model was coined by a group at Stanford.

And I think it's actually a really apt term because remember I said, you know, one of the big things that unlocked this latest excitement was the fact that we could use large amounts of unannotated data. We could train a model. We don't have to go through the painful effort of labeling each and every example. You still need to have your model do something you want it to do. You still need to tell it what you want to do. You can't just have a model that doesn't have any purpose. But what a foundation model is, it provides...

a foundation, like a literal foundation. You can sort of stand on the shoulders of giants. You can have one of these massively trained models and then do a little bit on top. You know, you could use just a few examples of what you're looking for and you can get what you want from the model.
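The self-supervised idea David describes can be sketched in a few lines: the "label" for each training example is carved out of the raw text itself, so no human annotation is needed. This is a toy counting model invented purely for illustration; a real foundation model is a neural network trained on vast amounts of text.

```python
# Toy sketch of self-supervised learning: mask a word, and the masked
# word itself is the training label -- no human annotation required.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the mat",
]

def make_examples(sentences):
    """Mask each word in turn; (context, masked word) pairs are free labels."""
    examples = []
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            context = tuple(words[:i] + ["[MASK]"] + words[i + 1:])
            examples.append((context, w))
    return examples

# "Train" by counting which word fills each masked context.
model = defaultdict(Counter)
for context, target in make_examples(corpus):
    model[context][target] += 1

# Predict the missing word for a masked sentence.
query = ("the", "cat", "sat", "on", "the", "[MASK]")
prediction = model[query].most_common(1)[0][0]
print(prediction)  # -> mat
```

The point of the sketch is only that the supervision signal comes for free from the data itself, which is what lets these models scale to unannotated text.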

So just a little bit on top now gets you the results that used to take a huge amount of effort, you know, to get from the ground up to that level. I was trying to think of an analogy for sort of foundation models versus what came before. And I don't know that I came up with

a good one, but the best I could do was this. I want you to tell me if it's plausible. It's like before foundation models, you had these sort of single-use kitchen appliances. You could use a waffle iron if you wanted waffles, or a toaster if you wanted toast. But a foundation model is like an oven with a range on top. So it's like this one machine, and you could just cook anything with it. Yeah, that's a great analogy. They're very versatile.
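To make "a little bit on top" concrete, here is a minimal sketch: a frozen set of pretrained word embeddings stands in for the foundation model, and the task-specific layer is nothing more than two labeled examples per class. The embedding values and example phrases below are invented for illustration; real systems fine-tune or prompt much larger models.

```python
# Hedged sketch: a frozen "foundation" (pretend pretrained embeddings)
# plus a tiny few-shot classifier built on top of it.
import math

# Pretend these 2-D vectors were learned by a big pretrained model.
pretrained_embedding = {
    "refund": [0.9, 0.1],
    "broken": [0.8, 0.2],
    "awful":  [0.9, 0.2],
    "love":   [0.1, 0.9],
    "great":  [0.2, 0.9],
}

def embed(text):
    """Average the frozen embeddings of known words in the text."""
    vecs = [pretrained_embedding[w] for w in text.split() if w in pretrained_embedding]
    n = max(len(vecs), 1)
    return [sum(v[i] for v in vecs) / n for i in range(2)]

# The "little bit on top": only two labeled examples per class.
few_shot = {
    "complaint": ["i want a refund", "it arrived broken"],
    "praise":    ["i love it", "works great"],
}
centroids = {
    label: [sum(embed(t)[i] for t in texts) / len(texts) for i in range(2)]
    for label, texts in few_shot.items()
}

def classify(text):
    """Assign the label whose few-shot centroid is nearest in embedding space."""
    e = embed(text)
    return min(centroids, key=lambda lbl: math.dist(e, centroids[lbl]))

print(classify("this thing is awful and broken"))  # -> complaint
```

All the heavy lifting happens in the (here, faked) pretrained embeddings; the task-specific part is just a nearest-centroid rule over a handful of examples, which is the economic shift the conversation is describing.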

The other piece of it, too, is that they dramatically lower the effort that it takes to do something that you want to do. And sometimes I used to say about the old world of AI, I would say, you know, the problem with automation is that it's too labor-intensive, which sounds like I'm making a joke. Indeed. Famously, if automation does one thing, it substitutes

machines or computing power for labor, right? So what does that mean to say AI or automation is too labor-intensive? It sounds like I'm making a joke, but I'm actually serious. What I mean is that the effort it took in the old regime to automate something was very, very high. So if I need to go and...

curate all this data, collect all this data, and then carefully label all these examples, that labeling itself might be incredibly expensive and time-consuming. And we estimate anywhere between 80 to 90% of the effort it takes to field an AI solution is actually just spent on data. So that has some consequences for the threshold for bothering. If you're only going to get a little bit of value back

from something, are you going to go through this huge effort to curate all this data? And then, when it comes time to train the model, you need highly skilled people who might be expensive or hard to find in the labor market.

you know, are you really going to do something that's just a tiny little incremental thing? No, you're going to do only the highest value things that warrant that level of investment. Because you have to essentially build the whole machine from scratch. And there aren't many things where it's worth that much work to build a machine that's only going to do one narrow thing.

That's right. And then you tackle the next problem and you basically have to start over. And, you know, there are some nuances here, like for images, you can pre-train a model on some other task and change it around. So there are some examples of this like non-recurring cost model.

that we have in the old world too. But by and large, it's just a lot of effort, it's hard, it takes a large level of skill to implement. One analogy that I like is, think about it as you have a river of data running through your company or your institution.

traditional AI solutions are like building a dam on that river. Dams are very expensive things to build. They require highly specialized skills and lots of planning. You're only going to put a dam on a river that's big enough, that you're going to get enough energy out of it that it was worth your trouble.

You're going to get a lot of value out of that dam if you have a river like that, a river of data. But the vast majority of the water in your kingdom actually isn't in that river. It's in puddles and creeks and babbling brooks. And there's a lot of value left on the table because it's like, well, there's nothing you can do about it. It's just that that's too...

low value, so it takes too much effort. So I'm just not going to do it. The return on investment just isn't there. So you just end up not automating things because it's too much of a pain. Now, what foundation models do is they say, well, actually, no, we can train a base model, a foundation that you can work on. We don't have to specify what the task is ahead of time. We just need to learn about the domain of data. So if we want to build something that can understand English language,

There's a ton of English language text available out in the world. We can now train models on huge quantities of it. And then it learns the structure of language, you know, a good part of how language works, from all that unlabeled data. And then when you roll up with your task, you know, I want to solve this particular problem, you don't have to start from scratch. You're starting from a very, very, very high

place. So that just gives you the ability to, you know, now all of a sudden everything is accessible. All the puddles and creeks and babbling brooks and kettle ponds, you know, those are all

accessible now. And that's very exciting. But it just changes the equation on what kinds of problems you could use AI to solve. And so foundation models basically mean that automating some new task is much less labor-intensive. The sort of marginal effort to do some new automation thing is much lower because you're building on top of the foundation model rather than starting from scratch. Absolutely. So that is...

That is like the exciting good news. I do feel like there's a little bit of a countervailing idea that's worth talking about here. And that is the idea that even though there are these foundation models that are really powerful, that are relatively easy to build on top of, it's still the case, right, that there is not some one-size-fits-all foundation model. So, you know, what does that mean and why is that important to think about in this context?

Yeah, so we believe very strongly that there isn't just one model to rule them all. There are a number of reasons why that could be true. One, which I think is important and very relevant today, is how much energy these models can consume. These models can get very, very large.

One thing that we're starting to see or starting to believe is that you probably shouldn't use one giant sledgehammer model to solve every single problem. We should pick the right size model to solve the problem. We shouldn't necessarily assume that we need the biggest, baddest model for every little use case.

And we're also seeing that small models that are trained to specialize on particular domains can actually outperform much bigger models. So bigger isn't always even better. So they're more efficient and they do the thing you want them to do better as well.

That's right. So, for instance, a group at Stanford trained a model. It was a 2.7 billion parameter model, which isn't terribly big by today's standards. They trained it just on the biomedical literature. You know, this is the kind of thing that universities do. And what they showed was that this model was better at answering questions about the biomedical literature than some models that were 100 billion parameters, you know, many times larger.

So it's a little bit like asking an expert for help on something versus asking the smartest person you know. The smartest person you know may be very smart, but they're not going to beat expertise. And then as an added bonus, this is now a much smaller model. It's much more efficient to run. It's cheaper. So there's lots of different advantages there. So I think we're going to see a tension increase

in the industry between vendors that say, hey, this is the one big model, and others that say, well, actually, there are lots of different tools we can use that all have this nice quality we outlined at the beginning, and we should really pick the one that makes the most sense for the task at hand. So there's sustainability, basically, efficiency. Another set of issues that comes up a lot with AI is bias and hallucination.

Can you talk a little bit about bias and hallucination, what they are and how you're working to mitigate those problems? Yeah. So there are lots of issues still. As amazing as these technologies are, and they are amazing, let's be very clear, lots of great things we're going to enable with these kinds of technologies. Bias isn't a new problem. So, you know,

Basically, we've seen this since the beginning of AI. If you train a model on data that has a bias in it, the model is going to recapitulate that bias as it provides its answers. So, you know, if all the text you have is more likely to refer to female nurses and male scientists, then you're going to get models that reflect that. For instance, there was an example where a machine-learning-based translation system translated from Hungarian to English, and

Hungarian doesn't have gendered pronouns, English does. And when you asked it to translate, it would translate, "They are a nurse" to "She is a nurse." It would translate "They are a scientist" to "He is a scientist." And that's not because the people who wrote the algorithm were building in bias and coding in like, "Oh, it's got to be this way." It's because the data was like that. We have biases in our society and they're reflected in our data, in our text, in our images everywhere.

And then the models, they're just mapping from what they've seen in their training data to the result that you're trying to get them to do and to give. And then these biases come out. So there's a very active...

program of research. And we do quite a bit at IBM Research and MIT, but also all over the community and industry and academia trying to figure out how do we explicitly remove these biases? How do we identify them? How do we build tools that allow people to audit their systems to make sure they aren't biased?

So this is a really important thing. And again, this was here since the beginning of machine learning and AI, but foundation models and large language models and generative AI just bring it into even sharper focus because there's just so much data and it's sort of building in, baking in all these different biases we have. So that's absolutely a problem that these models have. Another one that you mentioned was hallucinations.

Even the most impressive of our models will often just make stuff up. The technical term that was chosen is hallucination. To give you an example, I asked ChatGPT to create a biography of David Cox at IBM.

And it started off really well. It identified that I was the director of the MIT-IBM Watson AI Lab and said a few words about that. And then it proceeded to create an authoritative but completely fake biography of me where I was British, I was born in the UK.

I went to British universities in the UK. It's the authority, right? It's the certainty that is weird about it, right? It's dead certain that you're from the UK, etc. Absolutely, yeah. It has all kinds of flourishes, like I won awards in the UK. So, yeah, it's problematic because it kind of pokes at a lot of weak spots in our human psychology, where if something sounds coherent...

We're likely to assume it's true. We're not used to interacting with people who eloquently and authoritatively, you know, emit complete nonsense. Yeah. You know, we could debate about that. Yeah, we could debate about that. But yes, it's sort of blithe confidence that

throws you off when you realize it's completely wrong. Right. That's right. And we do have a little bit of a great and powerful Oz sort of vibe going sometimes, where we're like, well, you know, the AI is all-knowing, and therefore whatever it says must be true. But these things will make stuff up, you know, very aggressively. And, you know, everyone could try asking it for their bio.

You'll always get something that's of the right form, that has the right tone, but the facts just aren't necessarily there. So that's obviously a problem. We need to figure out how to close those gaps and fix those problems; then there are lots of ways we could use these models much more easily. I'd just like to say, faced with the awesome potential of what these technologies might do, it's a bit encouraging to hear that even ChatGPT has a weakness for inventing flamboyant, if fictional, versions of people's lives.

And while entertaining ourselves with ChatGPT and Midjourney is important, the way laypeople use consumer-facing chatbots and generative AI is just fundamentally different from the way an enterprise business uses AI. How can we harness the abilities of artificial intelligence to help us solve the problems we face in business and technology? Let's listen in as David and Jacob continue their conversation.

We've been talking in a somewhat abstract way about AI and the ways it can be used. Let's talk in a little bit more of a specific way. Can you just talk about some examples of business challenges that can be solved with automation, with this kind of automation we're talking about?

Yeah. So really the sky's the limit. There's a whole set of different applications that these models are really good at. And basically it's a superset of everything we used to use AI for in business. So the simple kinds of things are like, hey, if I have text and I have product reviews and I want to be able to tell if these are positive or negative, let's look at all the negative reviews so we can have a human look through them and see what was up.

Very common business use case. You can do it with traditional deep-learning-based AI. So there are things like that that are, you know, very prosaic; we're already doing it, we've been doing it for a long time. Then you get situations that were harder for the old AI. Like if I want to compress something. Say I have a chat transcript: a customer called in and they had a complaint.

They call back. Okay, now a new person on the line needs to go read the old transcript to catch up. Wouldn't it be better if we could just summarize that? Just condense it all down into a quick little paragraph, you know, customer called, they're upset about this, rather than having to read the blow-by-blow. There are just lots of settings like that where summarization is really helpful. Hey, you have a meeting.

And I'd like that meeting, or that email, or whatever, to just automatically be condensed down so I can really quickly get to the heart of the matter. These models are really good at doing that.
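As a rough illustration of the summarization use case David describes, here is a toy extractive summarizer: it keeps the transcript sentence whose words are most frequent overall. Real generative models write an abstractive summary instead, and the sample transcript below is invented for the example.

```python
# Toy extractive summarization: pick the sentence that carries the most
# frequently repeated words. Purely illustrative; generative models
# produce new, abstractive summaries rather than quoting a sentence.
import re
from collections import Counter

transcript = (
    "Customer called about order 1234. The package arrived damaged. "
    "Customer is upset and wants a replacement shipped quickly. "
    "Agent apologized. Agent promised a follow-up email."
)

# Split into sentences and count word frequencies across the transcript.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]
word_freq = Counter(w.lower() for w in re.findall(r"[a-zA-Z]+", transcript))

def score(sentence):
    """Content-heavy sentences accumulate higher total word frequency."""
    return sum(word_freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

# Keep the single highest-scoring sentence as a one-line summary.
summary = max(sentences, key=score)
print(summary)  # -> Customer is upset and wants a replacement shipped quickly.
```

Even this crude heuristic surfaces the sentence a new agent most needs, which hints at why condensing transcripts is such a natural fit for these models.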

They're also really good at question answering. So if I want to find out, say, how many vacation days I have, I can now interact in natural language with a system that has access to our HR policies. And I can actually have a, you know, multi-turn conversation, like I would have with an actual

HR professional or customer service representative. So a big part of what this is doing is it's putting an interface. When we think of computer interfaces, we're usually thinking about UI, user interface elements where I click on menus and there's buttons and all this stuff. Increasingly now, we can just

describe, in words, what you want. You want to ask a question; you want to sort of command the system to do something. Rather than having to learn how to do that by clicking buttons, which might be inefficient, now we can just sort of spell it out. Interesting, right? The graphical user interface that we all sort of default to, that's not like the state of nature, right? That's a thing that was invented and just came to be the standard way that we interact with computers. And so you could imagine, as you're saying, like,

Chat, essentially, chatting with the machine could become a sort of standard user interface, just like the graphical user interface did, you know, over the past several decades. Absolutely. And I think those kinds of conversational interfaces are going to be

important for increasing our productivity. It's just a lot easier if I don't have to learn how to use a tool or have awkward interactions with the computer. I can just tell it what I want and it can understand. It could potentially even ask questions back to clarify and have those kinds of conversations. That can be extremely powerful. And in fact, one area where that's going to, I think, be absolutely game-changing is in code. When we write code, programming languages are

a way for us to sort of match between our very sloppy way of talking and the very exact way that you need to command a computer to do what you want it to do. They're cumbersome to learn. You create very complex systems that are very hard to reason about.

And we're already starting to see the ability to just write down what you want and AI will generate the code for you. And I think we're just going to see a huge revolution of like, we just converse, we can have a conversation to say what we want, and then the computer can actually not only do fixed actions and do things for us, but it can actually even write code to do new things, you know, and generate software itself. Given how much software we have, how much craving we have for software, like we'll never have enough software in our world.

The ability to have AI systems as a helper in that, I think we're going to see a lot of value there. So if you think about the different ways AI might be applied to business, I mean, you've talked about a number of the sort of classic use cases. What are some of the more out there use cases? What are some unique ways you could imagine AI being applied to business?

Yeah, really, the sky's the limit. I mean, we have one project that I'm kind of a fan of, where we were working with a mechanical engineering professor at MIT on a classic problem: how do you build linkage systems, which are, you know, imagine bars and joints and motors, the things that are in your car. Building a thing, building a physical machine of some kind. Yeah, like real metal and, you know.

19th century, just old-school industrial revolution. Yeah, but the little arm that's holding up my microphone in front of me, cranes that build your buildings, parts of your engines. This is classical stuff. It turns out that if you want to build an advanced system, you decide what curve you want to create, and then a human together with a computer program can build a five- or six-bar linkage. And that's kind of where you top out. It gets too complicated to work with

more than that. We built a generative AI system that can build 20-bar linkages, arbitrarily complex. These are machines that are beyond the capability of a human to design on their own. Another example: we have an AI system that can generate electronic circuits. We had a project where we were building better power converters, which allow our computers and our devices to be more efficient, save energy, and have less carbon output.

I think the world around us has always been shaped by technology. If you look around, just think about how many steps and how many people and how many designs went into the table and the chair and the lamp. It's really just astonishing. And that's already the fruit of automation and computers and those kinds of tools. But we're going to see that increasingly be a product also of AI. And so it's going to be everywhere around us. Everything we touch is going to have been

helped in some way to get to you by AI. You know, that is a pretty profound transformation that you're talking about in business. How do you think about the implications of that, both for the sort of, you know, business itself and also for employees?

Yeah, so I think for businesses, this is going to cut costs, make new opportunities, delight customers. There's just sort of all upside, right? For the workers, I think the story is mostly good, too. How many things do you do in your day that

you'd really rather not, right? And we're used to having things we don't like automated away. If you didn't like walking many miles to work, then you can have a car and you can drive there. We used to have a huge fraction, over 90% of the US population engaged in agriculture and then we mechanized it. Now very few people work in agriculture. A small number of people can do the work of a large number of people. And then

And then, you know, things like email and, you know, they've led to huge productivity enhancements because I don't need to be writing letters and sending them in the mail. I can just instantly communicate with people. We just become more effective. Like our jobs have transformed dramatically.

Whether it's a physical job like agriculture or whether it's a knowledge worker job where you're sending emails and communicating with people and coordinating teams, we've just gotten better. And, you know, the technology has just made us more productive. And this is just another example. Now, you know, there are people who worry that, you know, we'll be so good at that, that maybe jobs will be displaced. And that's a legitimate concern. But just like...

how in agriculture, it's not like suddenly we had 90% of the population unemployed. People transitioned to other jobs. And the other thing that we found too is that our appetite for doing more things as humans is sort of insatiable. So even if we can dramatically increase how much one human can do,

That doesn't necessarily mean we're going to do a fixed amount of stuff. There's an appetite to have even more. So we're going to continue to grow the pie. So I think at least certainly in the near term, you know, we're going to see a lot of drudgery go away from work. We're going to see people be able to be more effective at their jobs. You know, we will see some transformation in jobs and what they look like, but we've seen that before. And the technology at least has the potential to make our lives a lot easier.

So, IBM recently launched WatsonX, which includes WatsonX.ai. Tell me about that. Tell me what it is and the new possibilities it opens up.

Yeah, so WatsonX is obviously a bit of new branding on the Watson brand. T.J. Watson was the founder of IBM, and our AI technologies have carried the Watson brand. WatsonX is a recognition that there's something new, something that has actually changed the game. We've gone from the old world of automation that was too labor-intensive to a new world of possibilities

where it's much easier to use AI. What WatsonX does is bring together tools for businesses to harness that power. So WatsonX.ai is

a set of foundation models that our customers can use. It includes tools that make it easy to run, deploy, and experiment. There's a WatsonX.data component, which allows you to organize and access your data. So what we're really trying to do is give our customers

a cohesive set of tools to harness the value of these technologies and at the same time, be able to manage the risks and other things that you have to keep an eye on in an enterprise context.

So we talk about the guests on this show as new creators, by which we mean people who are creatively applying technology in business to drive change. And I'm curious how creativity plays a role in the research that you do.

Honestly, I think the creative aspects of this job are what make this work exciting. I should say, the folks who work in my organization are doing the creating. You're doing the managing so that they can do the creating? I'm helping them be their best. And I still get involved in the weeds of the research as much as I can. But there's something really exciting about it.

One of the nice things about doing invention and research on AI in industry is that it's usually grounded in a real problem somebody is having. A customer wants to solve this problem; it's losing money, or there could be a new opportunity. You identify that problem, and then you build something that's never been built before to solve it. And I think that's honestly the adrenaline rush

that keeps all of us in this field. How do you do something that nobody else on Earth has done or tried before? So that kind of creativity. And there's also creativity in identifying what those problems are, being able to understand the places

where the technology is close enough to solving a problem, and doing that matchmaking between problems and the technology that can now solve them. And in AI, where the field is moving so fast, there's a constantly growing horizon of things we might be able to solve. So that matchmaking, I think, is also a really interesting creative exercise.

So I think that's why it's so much fun. And it's a fun environment we have here, too, because it's people drawing on whiteboards and writing out pages of math. Like in a movie. Like in a movie. Yeah, straight from central casting. Writing on the window in Sharpie. Absolutely. So let's close with the really long view.

How do you imagine AI and people working together 20 years from now? Yeah, it's really hard to make predictions. The vision that I like actually came from an MIT economist named David Autor, which is to imagine AI almost as a natural resource.

You know, we know how natural resources work, right? There's ore we can dig up, things that spring from the earth. We usually think of that in terms of physical stuff. With AI, you can almost think of it as a new kind of abundance, potentially, 20 years from now, where not only can we

have things we can build or eat or use or burn or whatever. Now we have, you know, this ability to do things and understand things and do intellectual work. And I think we can get to a world where automating things is just seamless, where we're surrounded by capability to augment ourselves to get things done. And you could think of that in terms of like, well, that's going to displace our jobs because eventually the AI system is going to do everything.

we can do. But you could also think of it as, wow, that's just so much abundance that we now have, and how we use that abundance is really up to us. When writing software is super easy and fast and anybody can do it, just think about all the things you can do. Think about all the new activities and all the ways we could use that to enrich our lives. That's where I like to see us in 20 years. We can do just so much more than we were able to do before.

Abundance. Great. Thank you so much for your time. Yeah, it's been a pleasure. Thanks for inviting me. What a far-ranging, deep conversation. I'm mesmerized by the vision David just described, a world where natural conversation between mankind and machine can generate creative solutions to our most complex problems. A world where we view AI not as our replacements, but as a tool for us to be able to do more.

It's a powerful resource we can tap into to exponentially boost our innovation and productivity. Thanks so much to Dr. David Cox for joining us on Smart Talks. We deeply appreciate him sharing his huge breadth of AI knowledge with us and explaining the transformative potential of foundation models in a way that even I can understand. We eagerly await his next great breakthrough.

Smart Talks with IBM is produced by Matt Romano, David Jha, Nisha Venkat, and Royston Preserve with Jacob Goldstein. We're edited by Lydia Jean Cott. Our engineers are Jason Gambrell, Sarah Bruguere, and Ben Tolliday. Theme song by Gramascope. Special thanks to Carly Megliore, Andy Kelly, Kathy Callahan, and the 8 Bar and IBM teams, as well as the Pushkin Marketing team.

Smart Talks with IBM is a production of Pushkin Industries and iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM.