
What are Diverse Intelligences? with Dr. Pranab Das

2020/8/11

Stories of Impact

Chapters

Professor Pranab Das discusses the multifaceted nature of intelligence, emphasizing that it is not a unitary concept but rather diverse and encompassing various forms such as problem-solving, empathy, and emotional intelligence.

Transcript

Welcome to Stories of Impact. I'm producer Tavia Gilbert, and in every episode of this podcast, journalist Richard Sergay and I bring you a conversation about the newest scientific research on human flourishing and how those discoveries can be translated into practical tools. This season, we're diving into the question, what are diverse intelligences?

If you've ever wondered if a dolphin is smarter than a chimp, if honeybees can learn even faster than rats do, and if there's any way for us to learn about how aliens might communicate, this season of Conversations is for you.

Throughout the coming weeks, we'll be in conversation with some of the world's most cutting-edge scientists, leading research projects focusing on intelligence, not just human intelligence, but animal and machine intelligence as well. We begin our series speaking with Pranab Das, who has, for 25 years, been a professor of physics at Elon University in North Carolina.

Professor Das has long been interested in interdisciplinary studies, particularly around the relationship between science and spirituality. And for the last several years, he's been the principal advisor to the Diverse Intelligences Initiative from Templeton World Charity Foundation, which has a three-part mission:

to map the contours of intelligence found in the natural world, to nourish the dimensions of human intelligence, including social, moral, and spiritual intelligence, and to encourage the practical and positive applications of artificial intelligence and machine learning.

Richard will be in conversation with researchers exploring questions around all these intelligences. Let's start at the very beginning. I want to throw out the basic term intelligence. And within a few minutes, although I know we could take the rest of your career to try and define it,

Tell me how you define intelligence. So this is, it's a great question. You had mentioned to me in advance that you would ask this, and I have been sleepless because of it. I think the first thing to remember is that our project has that S on the end: diverse intelligences. Implicit, I guess explicit, in that is the idea that there is no one unitary thing that is intelligence, that there are many, many different instantiations of this thing.

So I begin with the idea that there is a universe and that apprehending that universe in some successful way is a goal of life. So you could say that our job as thinkers, our job as intelligent beings is to come into interaction with the world in a way that does good things.

So the simplest formulation of intelligence, and you'll see this a lot, has to do with problem solving. In fact, there are more than a few people who would claim that all intelligences refer to some kind of problem solving. Therein lies peril, because if you take that route and you begin to ask what problems are, it's very easy to reduce everything to the kinds of problems that we're good at, or I should say some of us are good at.

And so the most prominent result of that kind of thinking is something called psychometrics. The psychometric movement was a movement to measure our capacity to solve problems and resulted in the IQ test, among other things. The problem there is you do identify people who are very good at solving particular problems,

And you like that sort of thing if you're the kind of person who likes that sort of thing. In other words, if you like solving math problems, if you like these sorts of social and political and economic and mathematical worlds in which the sort of dominant power sits, those are important to you.

But what's been terrible about that is it has deprivileged empathy, the capacity to get along with each other, which is every bit as important for successfully apprehending the world, especially the world of other people. And it's only really recently that a fellow named Howard Gardner published a book called Multiple Intelligences, in which he proposed that there might be six or seven or more intelligences that people have: emotional intelligence, kinesthetic intelligence, the capacity to move and dance beautifully.

So that's a kind of a jumping off point for the idea that if even humans have a diversity of intelligences within us, the world of animals, the world of AI must have even broader diversities. So just to wrap up, I guess what I would say is that for us, the mission first and foremost is to recognize that apprehending the world comes in many forms and to appreciate as many of those as we can.

So you were very clear about adding the S to intelligence. So it's diverse intelligences. And the reason for that is what exactly? You might say that

There are different problems in the world and that we can solve them using different tools. So that would be one way to imagine an intelligence. For example, the intelligence I mentioned of IQ is very good at solving word association and mathematical and shape association problems. A different way of thinking underlies people who are...

So you had mentioned the word empathy a little bit earlier.

Can we define that under the rubric of intelligence? Is that accurate? That's a great question. There are people who would say being empathetic or being joyful, being happy, having things that we would describe as positive affect might just be helpers modulating our capacity to make good decisions. So maybe there's an underlying sort of decision-making ability and all these other things are just fluid things

modulations, ways of changing or refocusing our attention as we go about solving problems. I think that's probably mistaken. So there's great work, and you've spoken with Brian Hare in one of your episodes, from him and other people who have studied bonobos. Bonobos get a bad rap. It's often emphasized that they use bonding through sex as one of their biggest problem-solving strategies.

But the fact is, they have many ways of comforting one another. And they've built a society based on empathy, based on comfort. I say society loosely; their species culture, if you want, is based on that. They're the only great ape that doesn't murder, which is a really profound observation. Gorillas, orangutans, certainly chimpanzees, and humans do. So they've solved a problem, a way of being with other bonobos,

that we haven't solved. That's not decision-making, solving IQ test stuff. That is the practical result of having tools for empathy, a specific kind of intelligence that allows them to be with others in a meaningful and productive way. So tell me about the genesis of the Diverse Intelligences Project at Templeton World Charity Foundation. How did it come about and why did it come about?

That's really exciting. Yeah, it's a great question. So in 2016, the foundation hired a new president, Andrew Serazin. He pulled together a team, and in that team were folk who were interested in these questions, and they had a variety of different workshops. And from those workshops emerged an idea of emphasizing the different aspects of diverse intelligences, largely inspired by the writing of the foundation's founder and donor, Sir John Templeton.

He asked, is it possible that there are many diverse intelligences extant in the world that we haven't taken the time to find, to notice, to recognize? So it was kind of a charge from the donor to inquire whether there might be other things that are apprehending the world, the created world, in a useful, meaningful, productive way.

So you have an idea that there are diverse intelligences in three big categories, human, non-human,

and the technology space. So that's one of the challenges of any initiative in science. The fact is that the best laid plans are only starting points. They're just the beginning from which great things grow. So while we had a variety of different initial blueprints, we were very careful not to let ourselves become overly stuck in any particular way of thinking.

By the same token, we didn't want to fall prey to our own enthusiasms, to a kind of higgledy-piggledy, oh-my-goodness-this-is-amazing reaction, as new people came on board, as we discovered new researchers. So we used a variety of different mechanisms, the first of which was what we called the Champion's Mechanism, and this is something that was invented by Andrew Serazin, and I think it was really brilliant. He said, look, we don't know everything. In fact, we probably don't know very much. Let's talk to a bunch of people who do.

So we reached out to a couple dozen different impressive scientists and philosophers and said, look, we're interested in finding people who are doing cool stuff. Could you recommend a few possible grantees?

They did, and then a process of peer review and down selection resulted in the first couple of batches of really exciting grantees. While all that was happening, we continued to learn more about the contours of the field. We did a kind of scoping exercise of the various fields involved.

And settled on a few challenge areas. Again, looking under the streetlights, trying to find things that we thought could be productively researched, and acknowledging that there are a lot of areas that we would have to exclude or not concern ourselves with. And so we built out three challenges, and each of those then became an area in which we looked for potential grantees.

Now we're past that point and we're moving to something called synthesis. So having developed a very large cadre of grantees, we're now in the 75 range, I think, of total grants, we've built a community of grantees and other experts and we bring them together annually. And they have begun to synthesize across their different areas of expertise, across questions, across domains of analysis.

And so what we're looking to do now is to produce richer, deeper analyses that are synthetic in nature. So we just completed a round of synthesis grants, and we're now in the process of developing new ideas for another set of syntheses. And these are very exciting. They're going to be what we call frameworks. That means they will be theoretical structures that have a strong empirical, that is, experimental basis,

that can be falsified, that can be tested against real-world experiments, but provide a kind of an overarching story of a set of intelligences, how they relate, how they interconnect, how they're different, how you might get from one to another, what the evolutionary trajectory might be, what the narrative space is that links them all together.

And so we're in the delightful phase now of listening to the experts and hearing what sorts of frameworks they think might meaningfully be applied across many domains of intelligence. So among the 75 grantees that you have funded so far, pick out a couple for me

to represent what diverse intelligences mean. Give me some live examples and why they're important. So I would say to listeners, go to the Templeton World Charity Foundation's website, poke around there, and you'll see many different amazing researchers doing amazing things. And each of them has something truly exceptional to offer. These are the best of the best. So I will say, as a journalist, one of my favorites

is the story of Lawrence Doyle and Fred Sharp connecting whale signaling technology and the possibility of one day understanding an alien signal, if that ever happened. Maybe you could think of it as like the android Dr. Dolittle. What would it take to be able to speak across species and eventually to speak with an alien species that didn't originate on Earth?

Those guys are in the process of trying to make the first steps in that direction. So what they suggest is if we are to encounter alien signals, for example, they will come across great distances. They will lack context. They will be pure in some sense, a simple set of codes, words, some kind of clicks, who knows.

It's hard to find anything disembodied like that in our experience of other species, other individuals. Mostly we have a lot of backstory. We have a lot of cues. We have a lot of what's sometimes called metadata. Whom am I speaking to? What do I know about them? What's their body language? So these guys said, what about whales? Whales interact over thousands of miles.

While they may have some recollection of whom they're speaking to based on some idiosyncrasies of the voices, almost no other information is carried along with those packages. That may be an analog to what it would be like to get to talk to aliens. And boy, howdy, if we can't figure out what a whale's saying, and we all come from the same genetic stock and we live on the same planet, we're in big trouble when we try to talk to the hexapods, you know, who come down in their spaceship.

So help me, Pranab, understand in terms of diverse intelligences, what does that tell you about what you're trying to get at, what the foundation's trying to get at, and what its impact could potentially be?

If you take it as possible that the universe has within it tremendous richness, and that that richness isn't necessarily reaching its peak in present-day humans, then you have to also grant that that richness may have reached a more interesting, fulfilling level extraterrestrially. By the same token,

the richness that exists on the Earth has not been fully plumbed. Here's an ideal opportunity to contextualize two different undertakings, both of which take it as axiomatic that the world consists of others beyond humans. If we can learn more about otherness, communication with otherness, then we're really coming closer to a better understanding of the created world, the world that exists beyond ourselves.

Which brings me to another terrific example, which is Professor Andrew Barron's work on the honeybee brain. So he's studying the honeybee brain as a way of potentially understanding the human brain.

That's right. So Andrew is an exceptional scientist. He has a really, really deep knowledge of the ways of bees, but he also is a polymath, extremely skilled in a number of areas, including the theories of cognition and evolution. What he is interested in, in that particular area of his work, is to suggest that

an exquisitely simple, or at least relatively simple, organism like the honeybee must get all of its behavioral complexity from what seems to us to be just a tiny, tiny little brain. It's not easy to study. It's still hundreds of thousands of neurons, but it's easier. So what he'd like to do, what he is doing basically, is creating a connectivity map

of the brain of a bee. And from that, he hopes to ascertain what's sort of the minimal structure necessary in order to get rich, complex, surprisingly human-like behavior. And here's that researcher himself, Andrew Barron, whom we'll get to know in the full episode featuring his exploration of the honeybee brain. So if we can model the bee brain, we can take insights from those models

and translate them directly into technological applications. If we can model the bee brain, all of this intelligence, all of this dynamic autonomous behavior that we get out of bees, we should be able to capture that in the model. There'll be things that we can learn from that that we could translate into robotics. If we can do that, if he can do that, then he knows a lot about the modules that might be present in humans that we then deploy to do even more

complicated things like having language and building supple societies instead of being locked into the kind of social dynamics that honeybees are. So you're actually telling me that by understanding a, quote, simple brain like the honeybee's, scientists could help unlock a much more complex thing like the human brain.

I think unlock is a tricky word. There is a group in Seattle, for example, that works on the human brain, and they would say you have to have the whole thing to really unlock what's going on in a human brain. I think what you can certainly say with reasonable comfort is that there are aspects of human behavior that seem to be paralleled in something like a honeybee.

Because we all come from a very common genetic and developmental lineage, it would be surprising if there weren't some kind of lesson to be learned.
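
To make the idea of a connectivity map concrete, here is a minimal sketch of the underlying data structure: a directed graph whose edges record synapse counts between neurons. This is purely illustrative; the neuron names are invented, loosely echoing the insect mushroom-body pathway, and it is not Barron's actual data or tooling.

```python
# A toy connectivity map: a directed graph in which each edge
# records how many synapses link one neuron to another.
from collections import defaultdict

class ConnectivityMap:
    def __init__(self):
        # neuron -> {downstream neuron: synapse count}
        self.edges = defaultdict(dict)

    def add_synapse(self, pre, post, count=1):
        """Record `count` synapses from neuron `pre` to neuron `post`."""
        self.edges[pre][post] = self.edges[pre].get(post, 0) + count

    def downstream(self, neuron):
        """All neurons that `neuron` projects onto."""
        return list(self.edges[neuron])

# Invented neuron names, loosely echoing the insect mushroom-body
# pathway (projection neuron -> Kenyon cell -> output neuron).
bee = ConnectivityMap()
bee.add_synapse("PN_17", "KC_204", count=3)
bee.add_synapse("KC_204", "MBON_02")
print(bee.downstream("PN_17"))  # -> ['KC_204']
```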

Help me understand, in the diverse intelligences world, how AI is impacting the world around us. People must be the arbiters of morality. There is no morality outside of humans. It's something that we have come to together, that we continue to struggle with. So by no means are they, the team at Duke, suggesting that we could program a machine to give us morality or to...

teach us how to be better morally. But what they suggest is that we could teach ourselves to be better if we gave ourselves new tools. So they have a variety of approaches, one of which is to suggest that we poll a lot of people and say, "Gosh, what's the right decision?" You mentioned organ transplant and that's what their grant was about. You poll a bunch of people and say, "Who should receive this organ? Under what circumstances?"

No one probably is completely right or unbiased, and you have to be very careful about whom you're asking, but over the collective,

some kind of zeitgeist, some kind of moral compass emerges. That's what morality is in a community. We come to an agreement about what's the right thing to do. Unfortunately, it's extremely hard for humans to take on board what hundreds or thousands of other humans are thinking. It's just we're not built for that. We're built for dyadic interactions, for small group interactions.

So if I really want to know what thousands of people are thinking, I fall back on something simple like opinion polls or political predictions. That's not a particularly rich way to ascertain what the right thing is, what the good thing is, what the deep moral thing is. Machines, on the other hand, AI in particular, are exquisitely suited to extracting patterns from large data sets. So if you give a machine tremendous access to human morality...

the machine could extract, in some very cryptic, sort of mysterious way, the essence of the contours of the collective moral mind.
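
As a rough sketch of that interlocutor pattern, the toy Python below aggregates many (entirely invented) survey judgments about organ-allocation scenarios into a crowd consensus, then flags a committee decision for further discussion only when it conflicts with a strong consensus. The real Duke system is of course far more sophisticated; this only shows the shape of the comparison.

```python
# A toy version of the "AI as moral interlocutor" idea: aggregate many
# people's judgments, then compare a committee's decision to the crowd.
# All scenario IDs, recipients, and numbers are invented for illustration.
from collections import Counter, defaultdict

# Hypothetical survey data: (scenario, respondent's chosen recipient).
survey_responses = [
    ("case_01", "recipient_A"), ("case_01", "recipient_A"),
    ("case_01", "recipient_B"), ("case_02", "recipient_B"),
    ("case_02", "recipient_B"), ("case_02", "recipient_B"),
]

def build_consensus(responses):
    """Map each scenario to the crowd's modal choice and its vote share."""
    by_case = defaultdict(Counter)
    for case, choice in responses:
        by_case[case][choice] += 1
    consensus = {}
    for case, counts in by_case.items():
        choice, votes = counts.most_common(1)[0]
        consensus[case] = (choice, votes / sum(counts.values()))
    return consensus

def review(case, committee_choice, consensus, threshold=0.6):
    """Flag a case for deeper discussion when the committee's choice
    conflicts with a strong crowd consensus (share >= threshold)."""
    crowd_choice, share = consensus.get(case, (None, 0.0))
    if crowd_choice and share >= threshold and committee_choice != crowd_choice:
        return (f"{case}: committee chose {committee_choice}, crowd favored "
                f"{crowd_choice} ({share:.0%}) -- worth a deeper conversation")
    return f"{case}: no conflict with the crowd consensus"

consensus = build_consensus(survey_responses)
print(review("case_02", "recipient_A", consensus))
```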

The team at Duke would not suggest you then simply allow the AI to act based on its analysis of the contours of our morality, but instead it can be our interlocutor. So say you come up to a decision and you feel pretty good about it, as maybe a committee at a hospital, but then you could ask the AI what it thought the world at large would have done. And nine-tenths of the time it's going to say pretty much the same thing you came up with.

But if 10% of the time you and its understanding of the whole world or our whole community's sense of morality conflict,

then you can have a much deeper conversation. You go back to your committee, you bring in new experts, you ponder, you ask religious figures, where did we go wrong? Is the machine misunderstanding people? Were there a bunch of people who made bad decisions and told them to the computer? Or is it a blind spot of our own? So you could use technology, in other words, to help us help ourselves, not asking to outsource our moral minds, not outsourcing our intelligences,

but enriching them through a self-learning process. One of the interesting aspects of that particular project, which we will dive into in a later episode,

is the definition of morality and whether morality might change, for example, just dependent on geography. Are those who live in California different from those who live in Georgia? And if so, could a machine that is imbued with whatever we call morality

have a different impact on that decision-making in California for the transplant versus one in Georgia? This is one of the biggest problems associated with machine learning in general. If you rely on humans, humans have biases, humans have blind spots, humans have culture, humans have regional differences.

And unless you're very self-aware about how you build your data set, you can end up building terrible, mistaken biases into the way the machine apprehends what humans want. Trying to teach machines to know what we want is probably the hardest undertaking presently being seriously worked on in AI. It's a little bit like the old thing: we want to make a machine that gives a damn.

If you could get a machine to understand what it is we want, that's the first step to making sure that its actions are congruent with our ambitions and our hopes for the future.

Let's hear from another of the researchers we'll meet later this season, Jana Schaich Borg, whose work focuses on AI and morality. I think one of the biggest contributions is not just in how to imbue a morality in a machine, but in the second step. How do you use that information to impact our moral judgment? I think...

the past 15 years that I've spent trying to understand how we make our own decisions are really impacting how we think about how the information that comes from the AI should be presented to humans to actually influence their behavior in a way that will be effective and useful. And in one part of my life, I really do think about things in terms of what I would tickle in the brain to change a moral judgment.

Back to the conversation with Richard and Professor Pranab Das. Do you ever worry, as a research scientist, that technologists could potentially build a HAL? So one of the decisions we made early on was not to dive into the world of AI. It's very highly resourced. So despite the fact that this foundation has substantial resources allocated to the initiative, they're dwarfed

by the budgets of the Googles and the Facebooks and the Apples of the world. While we work closely with a number of artificial intelligence scholars, we don't invest in folk working on questions like that. The way that's generally framed is that there's going to be something called artificial general intelligence. That is, intelligences that can come to new problems unexpectedly and learn how to solve them in the same way that we do.

If you're asking me personally, I don't think that it's likely anytime soon for a variety of reasons. The most salient, I think, is that we build AIs around the idea of goals. Those goals are not entirely static, but are certainly more rigid than the goal structures of humans. I think we often confuse ourselves by imagining that we're always acting in accordance with some rational or well-defined goal set. In fact, our goals are always changing.

and that's a ferment that is chemical in nature in many cases. Our endocrine system helps retune our goals, our attention span changes, our visual focus changes depending on how stressed we are. Our wants and needs vary with our hunger and other appetites. The human being doesn't live in a mathematicalized system of goals. And so until AI can develop that sort of suppleness

it seems to be unlikely that the things that come out of artificial intelligence will look very much like those that come out of biological intelligences. And that's a good thing. The fact is that AIs are fantastic at a bunch of stuff we aren't, and we're really good at a bunch of stuff they aren't. How much better and richer a world it is when you have expertise, capacities that coexist and mutually reinforce one another. What a drab world it is when we all think the same way.

I'm curious why this push toward cross-disciplinary work is now so important to Templeton. What does it say about the future of science? Vanessa Woods and Brian Hare are rock stars. They are incredible scientists, incredible people, and have contributed greatly both in science and to the public discourse. Before I go any further, I have to give a shout out to their newest book. It just dropped on July 14th.

It's called Survival of the Friendliest, and it's an interrogation of what makes humans so successful. They argue that it isn't our tool use or our language, those are important, but our capacity to be friendly, to have productive, enriching social interactions with one another and, in the case of dogs, with other animals.

Not only is this great science, really a novel paradigm in scientific thought, but it also has tremendous implications for human flourishing. So one of the things that matters the most to the Templeton World Charity Foundation is betterment, is humans doing well, being good in a kind of classical, capital-G sense.

If the work that goes on between disciplines can advance the study, the understanding of what makes certain kinds of intelligences successful, can help us magnify those in humans,

we stand a real chance of improving human flourishing. And I think there's really no better example than these scientists. As I said, they worked with bonobos. You mentioned their work with dogs. They wrote a New York Times bestseller called The Genius of Dogs. They're taking what they've learned from the capacities, natures, and evolution of dogs and bonobos and

inflecting those through rich, careful science, inviting philosophers, inviting comparative psychologists, inviting ethologists all in on a conversation about how these things then meet and create a bigger, richer framework under which we might understand humans as well. Here's one of those research rock stars, Brian Hare, talking about his study of diverse species.

Many of the differences we see between wolves and dogs, we see between bonobos and chimpanzees. And this is a perfect example of what our project would be all about: trying to understand why that is.

Why is it that you have these two distantly related pairs of species that have become so similar in the way that they've changed from one another? What was the process that drove it? We think the same evolutionary force has shaped dogs from wolves and shaped bonobos from a chimpanzee-like ancestor. And we think that force is selection for friendliness.

So what a brilliant undertaking. So the project you're referring to in particular is ambitious. It seeks to create a kind of common platform across which researchers in a variety of different systems, a variety of different species

can interrelate, they can put their data together in ways that are meaningfully comparable. That's a big undertaking. It was very successful and now I think it will serve as a tool going forward for researchers in several different species and domains of analysis to compare their work with one another. Is anyone else in the foundation world doing this sort of work?

Yes and no. Foundations are interesting in their idiosyncrasies. There are certainly many foundations that are doing rich work in the artificial intelligence space. The most comparable explorations, I think, have to do with the morality of artificial intelligence and its use,

the applications and misapplications of AI. Interestingly, that has become a field in and of itself and it means something a little different from what we think of when we use these words.

There's a field emerging called the ethics of AI. In most cases, that means: how do we ethically use AI? How do we ethically develop AI? What are the outcomes of our artificial intelligences? That's really quite different from something like AI's impact on the development and enrichment of our own ethics.

So what I would say is the Templeton World Charity Foundation is unique in the world of foundations for its willingness to boldly, I think, state that there is such a thing as the good, that human intelligences are powerful ways of apprehending the world so we can be better, while

humbly positing that there are other ways to be good, that there are other things that we can learn, both from animals and from our own creations, our own constructs, and from ourselves, and that each of those things will contribute to human flourishing and a development forward

in a way that is really very exciting. It makes me very proud to be associated with that group. The initiative as a whole: when you look back on it, what do you hope its impact will be on the world?

So we have a couple of ideas, and these are still formative. It is to be hoped that an interdisciplinary community will have formed that is robust and can clearly state theoretical and empirical ideas that have implications for areas that have not yet been plumbed. So one of the most successful sciences, physics, I'm a physicist, so I'll always see things through that lens,

has made many assertions over the centuries about things that have not yet been studied, or the theories of physics have implications for things that have not yet been studied. And when those studies take place, they can be tested against, or I should say the theory can be tested against, the results of those studies. That is not as often the case in some other fields. That's largely because biology, psychology, philosophy work in such big spaces,

with lots of moving parts that don't necessarily make them easily interoperable. If we've done a good job by the end of all of this, studies of intelligences will be able to hang different results on different theoretical frameworks and to challenge those theoretical frameworks in a way that should make the frameworks better, more robust, and more predictive going forward. So if we've done our job, this community will have output a set of frameworks

that subsequent researchers will be stimulated by, taken with, and find in them ways of cross-fertilizing and cross-communicating their work, thereby building, strengthening, and making more durable the frameworks themselves. That's part and parcel of a strategy to try to keep the thing going when the resource faucet

is turned off. It's one of the great sadnesses of many funders that they can induce excitement, they can induce research by a flow of resources, but when those resources shift,

many researchers then seek other topics or other areas of study. So it's our hope that we can help our researchers find other sources of funding, help other funders get excited about this sort of thing, and importantly, keep the community itself alive. So resources notwithstanding, they still feel that there are productive things that they have to speak about with each other.

I'm curious, along this journey that you have helped Templeton lead, what have been some of the biggest surprises in terms of what you've learned about what diverse intelligences means? I'm always surprised by the depths of my own ignorance. I think we as a team arrived with a lot of excitement, a lot of enthusiasm,

and some expertise. But as we meet each of these brilliant researchers, we uncover how much there is yet to know. There's a great quotation again from the founder of the foundation, the donor, Sir John Templeton, which is: how little we know, how eager to learn. So while one can come at something like this with a crude sense of, I think I know what intelligences are, or

we have a sense of what the blueprint should be, the biggest and most happy surprise is how much else there is. And every conversation elucidates that depth of excitement about how much there is yet to learn. And how little we know. How little we know.

Pranab, best of luck as you continue this amazing project. Thank you, Richard. It's been a pleasure. We're excited to dive deeper into the exploration of diverse intelligences in our next episode, when we return with a full conversation with Lawrence Doyle and Fred Sharp.

and their exploration into the research about what humpback whale communication can tell us about the potential for interstellar conversation with alien intelligence. Well, the sounds in the ocean in some ways make amazing interstellar analogs. Oceans have this amazing acoustical conductivity, so they can probably be communicating with each other fairly efficiently. They've essentially had the ocean internet for millions of years.

We look forward to bringing you more of that conversation next week. In the meantime, we hope you enjoyed today's story of impact and that you're looking forward to hearing more about honeybees, dogs, AI, and more. If you liked this episode, we'd be grateful if you would take a moment to subscribe, rate, and review us on Apple Podcasts. Your support helps us reach new audiences.

And for more stories and videos, please visit storiesofimpact.org.

This has been the Stories of Impact podcast with Richard Sergay and Tavia Gilbert. This episode produced by TalkBox and Tavia Gilbert. Assistant producer, Katie Flood. Music by Alexander Filippiak. Mix and master by Kayla Elrod. Executive producer, Michelle Cobb. The Stories of Impact podcast is generously supported by Templeton World Charity Foundation.