
Does AI dream?

2024/11/30

To The Best Of Our Knowledge

People
Meghan O’Gieblyn
Sougwen 愫君 Chung
Walter Scheirer
Topics
Walter Scheirer argues that misinformation online is not primarily driven by deepfake videos; most of it circulates as memes, often with a humorous element, which makes its spread and impact more complicated. He notes that deepfakes have had limited political impact, but the personal harm and ethical problems they cause in pornography cannot be ignored. He suggests that decentralizing the internet, so that amateurs control some of its infrastructure, might be healthier for the web. He also explores how building internet communities and creating AI both connect to human co-creation and a spiritual dimension, and how internet memes are a modern expression of the myth cycle, reflecting humanity's ongoing creativity and imagination.

Deep Dive

Key Insights

Why are deepfakes a significant concern for individuals, particularly women, despite having little impact on politics?

Deepfakes are a significant concern for individuals, especially women, because they are often used to create and disseminate non-consensual pornography, leading to humiliation, discrediting, and destruction of lives. This is a human rights issue that affects real people, yet it receives less media attention compared to political concerns.

Why does Walter Scheirer believe decentralizing the internet could be a positive step forward?

Walter Scheirer believes decentralizing the internet could be a positive step because it would allow more localized control over the infrastructure, similar to the bulletin board systems of the 1980s and 1990s. This model could foster healthier communities and reduce the global reach and influence of large social media platforms.

Why did Meghan O'Gieblyn explore automatic writing and hypnosis during her writer's block?

Meghan O'Gieblyn explored automatic writing and hypnosis to tap into her unconscious and bypass her overthinking, which was hindering her creativity. She was curious about producing language without conscious control and seeking external meaning or guidance in her life.

Why do some tech leaders like Elon Musk and Sam Altman aim to create artificial general intelligence (AGI)?

Tech leaders like Elon Musk and Sam Altman aim to create AGI because they are influenced by the idea of the singularity, where AI surpasses human intelligence and can solve complex problems like climate change and disease. They see AGI as a tool to create a new kind of intelligence that could provide answers and solutions that are difficult to achieve through human thought alone.

Why does Sougwen Chung treat AI and robots as collaborators rather than tools in her art?

Sougwen Chung treats AI and robots as collaborators rather than tools because she believes in the creative potential of human-machine interaction. Her art aims to open dialogues about the dynamics between humans and machines, and she finds value in the error states and unpredictability that arise from these collaborations.

Chapters
The current internet's design is discussed, focusing on its global reach and the resulting issues of trust and misinformation. A scholar's suggestion for decentralization and a return to localized networks like bulletin board systems of the 80s and 90s is explored as a potential solution.
  • The current internet's centralized design contributes to the spread of misinformation and deepfakes.
  • Decentralization, returning to localized networks, is proposed as a solution.
  • This approach would empower amateurs and improve the internet's health.

Shownotes Transcript


Hey friends, it's Anne. Conversations about technology seem to veer between hype and paranoia these days. Generative AI programs are doing amazing things. They're writing term papers, making art, diagnosing disease. But their power is also kind of freakish and scary. I mean, how long before AI steals our jobs or kills us off?

Well, this hour, we're going to talk with some folks who are not scared of that digital future and who are asking big existential questions about AI and about the nature of its intelligence. This hour, does AI dream? Keep listening. From WPR.

This episode is brought to you by Progressive Insurance. You chose to hit play on this podcast today. Smart choice. Make another smart choice with AutoQuote Explorer to compare rates from multiple car insurance companies all at once. Try it at Progressive.com. Progressive Casualty Insurance Company and affiliates. Not available in all states or situations. Prices vary based on how you buy.

Still searching for a great candidate for your company? Don't search, just match with Indeed. Indeed is your matching and hiring platform with over 350 million global monthly visitors according to Indeed data and a matching engine that helps you find quality candidates fast. And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com slash the best. Terms and conditions apply.

Need to hire? You need Indeed. It's To the Best of Our Knowledge, I'm Anne Strainchamps. Walter Scheirer was a teenage hacker. One of those kids who found his way to the shadow side of the internet and fell in love with its weird culture of fact and myth. This is like, I guess, when I was in middle school. Some friend of mine found some site on the early internet, these text files. And he said,

Creative writing produced by hackers. Some of it was technical, some of it was purely fictional and really interesting. It was like very underground, cheat codes, video games and things of that nature. But then it would dive deeper where it's like, have you ever considered how your operating systems work? You ever heard of this Unix operating system? This is what businesses in the government use, right? Here's how to use it. Here's how to access it. So then you get into the creative writing aspect.

It's like, by the way, I was on this UNIX system that I accessed through the federal government in the United States, right? Seeing interesting things I was not supposed to see.

UFOs, paranormal activity, stuff that couldn't possibly be true, but it was intriguing enough where you wanted to keep reading. There's a very famous hacker group, the Cult of the Dead Cow, that loved telling reporters that they had this... I love the name. Right? Like, special way to move satellites in orbit. Not technically feasible, but it seemed plausible enough.

The hacking world was not very large, and so it was not that difficult to find the quote-unquote "elite" hackers. But of course they were like the elite. So I was kind of like the bottom feeder, the neophyte. The kid brother. Exactly. So hackers were really upset as they started to gain some notoriety with their portrayal in the press.

Quinton is a hacker, a computer genius who illegally breaks into computers for fun. Hackers basically convince Dateline NBC that they have recovered evidence of UFOs from government military facilities. Mm-hmm. So the episode features this hacker who appears...

in this anonymous fashion. His voice is altered, his face hidden. So you can't, you know, make out his appearance. That's the only way Quinton would agree to talk to us. So does he have one of those deep, gravelly voices? Yeah, exactly, exactly. I have access, you name it, government installations, military installations. He's at his computer, and these text files are flashing by. You know, catalog of UFO parts list, all this, like, crazy stuff. Ha ha ha ha!

What the hackers end up doing is going through this culture building exercise on their end, hoping that people who are thinking critically about this news story realize, wait a minute, why isn't this the emphasis of the episode, right? Did you take part in any famous exploits or are there things that you did that you will not reveal on the air? I wish, I wish. Yeah. No comment. No comment. Okay. So the world may never know exactly what Walter Scheirer did as a teenage hacker.

But today he's a computer scientist at Notre Dame, where he works on media forensics, meaning he detects fake things on the internet for a living. That's also the title of his new book, A History of Fake Things on the Internet. So fear of deception and misinformation seems at an all-time high right now. And for good reason. Generative AI is ushering in a whole new world of utterly realistic fakes. I don't think we're ready for it.

Which is why I wanted to talk with Walter Scheirer. He is a realist about the digital future, and what he sees isn't the end of truth, but the next iteration of the human imagination. Case in point, a couple of years ago, he set his students to scour the web for examples of fake content, and they found plenty. Just not what they expected.

What we started to notice is that, yeah, there's a ton of manipulation out there, but it's almost always in meme format. So images that are not secretly edited to fool you, images that are obviously edited.

And they're usually humorous, right? There's some sort of joke embedded in the content. But in some cases, the meme is suggesting, right, some alarming things. We have a large collection of anti-vaccine memes, some pretty humorous memes. Like one in particular I have in mind, it's some like babies and their eyes have been edited with Photoshop. So they're like huge and they're kind of scared, like, oh, I'm going to get...

vaccinated. But then if you keep going down the rabbit hole, right, the messaging gets more and more insidious. You start to sort of question, right, if you get deep into this, maybe there is something to these jokes, right? Maybe I should sort of scrutinize this matter more.

And that's a tricky thing when you think about humor on the internet or humor just in general. In some cases, parody and satire are really important. In other cases, you have to be a bit more critical when you're looking at what the message is. It's incredibly complicated to draw the line. What is harmless and what is dangerous and deceptive?

Have you arrived at any clarity around where does that line lie? Yeah, that's a great question. So one thing I came to appreciate more actually when writing this book is that parody and satire are really important. This is a very old sort of format of making a social critique and is often used very strategically.

So in the book, I point out a really famous case which predates the internet. Jonathan Swift's famous pamphlet, A Modest Proposal. It's about cannibalism. It's about eating babies. It's really aggressive. It's really disturbing. But Swift isn't really talking about cannibalism. What he's trying to do is make a social critique about the state of the poor in Ireland.

But over the years, this pamphlet has been routinely misunderstood, even up to the present day. People lose their mind. It's like, how in the world could we have people out there advocating for the consumption of babies? Sort of missing the point. And a lot of the internet is just like that. There's all this transgressive material. You have to think a bit, right, to get the message. And if you're not, that's where it gets dangerous. And of course, the feed-like nature of social media causes us to kind of just stick to the surface level message. Right.

Let's talk about deep fakes. When people talk about being really afraid of fake things on the internet and about a tsunami of fake content, is that overblown?

Yeah, so this is a question that's been coming up recently. So deepfakes now have been around a little while. They first appeared on the internet in 2017. Huge concern right away that you would have videos appear in a political context that would change the course of an election or lead to political violence, really bad sort of doomsday-like scenarios.

But none of that has transpired. As far as anybody can tell, deepfakes have had really no impact on politics whatsoever. Where they do really concern me, where I am really sort of wary of them,

When they target individuals, for instance, there's been a rise in deepfake pornography. The tool is being used now basically to humiliate women. And that is completely unacceptable. So revenge porn, you're talking about where somebody photoshops the girlfriend who dumped them. Or wasn't there a recent case of...

A journalist in India, she wrote something critical of the government. Yeah, exactly. She was a prominent politician, and she was targeted with this deepfake pornography to basically discredit her. Which I think she has said kind of destroyed her life. Exactly, exactly. Another case, too, it's been coming up. There's this really perverse genre of pornography. It's like fantasy porn. I want to see this actress in this pornographic scene. So you use a deepfake to construct this.

This is a huge problem. I feel like this is like a human rights issue, fundamentally. You're seeing children do this now, right? There have been several cases in like middle schools, high schools where this has come up. But what worries me, a lot of the conversation is around those sensational things. And we often lose sight of the real problems that are affecting real people.

The deep fakes in the pornography realm haven't received the same treatment in the news that all the political stuff has. And I think that's a real problem. I guess the other thing that people seem to worry about an awful lot is that because of the global reach of the internet, the overall impact of deep fakes is to create a culture in which there's just a breakdown of trust.

And the feeling that we can't count on anything to be real, and that that's ultimately going to wind up sabotaging democracy. Yeah, so I think this is a pretty interesting insight that relates to the design of the current internet. So a lot of my time recently has been spent thinking about alternatives to this. If you look back to the early days of computer networking, it wasn't like this.

You would have more localized computer networks. You had these things called bulletin board systems, right? So somebody in a local community would put their computer on the phone network and people would dial into it.

Usually, again, it was like a special topic board. So people interested in something like knitting, right? There would be like a knitting bulletin board, like whatever your favorite hobby or interest was, there was something out there. And so you didn't have these global services where with a click of a button, I'm reaching basically everybody who is subscribing to the service who's online. And it sort of worked a lot better.

There's a scholar of media theory at the University of Virginia, Kevin Driscoll, who wrote a fantastic book called The Modem World. And he makes this case that maybe decentralizing things on the Internet such that amateurs have some control over the infrastructure like we did back in the 80s and 90s would be

really good and healthy. And I really agree with that. I think this is a fantastic idea. I also think it's not that expensive to do this. I think it's just we got to get people to sort of unplug from these giant social media services and go back to this older model. And again, many people are familiar with this because they've been online as long as I have since like the early 1990s.

The way you wrote about the days of the hackers, there was definitely a feeling of, you know, the internet wants to be free. And I felt like that's where you were coming from. And then, which I couldn't quite put together with knowing that here you are teaching at Notre Dame. And I know that Catholic social thought has informed the way you think about the internet. And somehow I was having a hard time putting these two things together.

Former hacker, Catholic social thought. Can you explain? Yeah, so I think behind this whole book project is really the idea of community and how do communities form.

Like the hackers I just mentioned, right? This is a really interesting subculture. You have a bunch of people who are connecting for the first time and building something that endures. Hackers go on to create the computer security industry. Like they do extraordinary things. Some of them go into politics, like Beto O'Rourke famously was a member of the Cult of the Dead Cow. How do you build a community like that?

It seems to me like the internet has huge potential for bringing people together. And of course, if you turn to Catholic social teaching, it's really about how do we sort of flourish in a communal sense? How do you build some notion of the common good?

And if you turn back the clock and look at the ideas that formed the construction of the Internet we have today, a lot of that comes from Marshall McLuhan, the famed media theorist. He's very much associated with the counterculture in the 1960s. One fact that a lot of people don't know, he was a devout Catholic.

I did not know that. He converted to Catholicism, and he believed that the Catholic faith was the ultimate media system because you're always in communion with the saints, right? Huh. People who have passed.

and of course with God. All of these things come together, right, through prayer, meditation, these forms of spirituality. And it's interesting to see those ideas then sort of trickle down into his thinking about the media. He was obsessed with this idea of uniting the entire planet through information networks. There's also a little element almost of transubstantiation, isn't there? Didn't he write about, in the end, we will become information? Yeah, yeah, yeah, yeah, yeah. That's true. Exactly, right?

Technology is kind of mysterious when you think about it. I don't think that's entirely a crazy idea. A lot of people are talking about emerging technologies in this way, too. Think about AI, right? Is there some spiritual dimension to it? And again, I think there's something to that. And of course, all these things, when we think about technology, are human creations. In Christianity, we're called to co-create with God. He gave us this facility

to create. And so that I think is sort of like a very powerful message behind the scenes with a lot of this stuff. Say more about AI and the spiritual component. That's not something I would have imagined. Mostly, all you hear about is AI is going to eat humans. Yeah, exactly. We have these like doomsday notions about AI. Now again, as a practicing computer scientist, I have a more realistic, grounded view on AI and its limits.

It's not going to destroy the world anytime soon. I'll reassure all of the listeners out there that we have more pressing issues here on planet Earth to deal with. That said, like, then you have another community that's like, you know, we're going to create this super intelligence and then worship it. That's not a great idea. And also, again, sort of...

You know, is the super intelligence even possible? Is this where the technology is heading? Again, from the realistic vantage point of a computer scientist, no. Then there's this really interesting third perspective, which again comes back to this idea of technology and creation, right? It's like, where does AI come from? Well, it comes from all of us.

The most powerful AI systems, ChatGPT, Midjourney, DALL-E, these things are trained on the data that we generate and then ship to the internet. It's sort of like a reflection of all of us, the human community. And that's actually reassuring, right? I mean, yes, it's going to have some flaws because humanity is flawed, but this is kind of neat. We've all had input into these systems.

Well, there's this concept that you bring up that I find really fascinating, the myth cycle. So this was a concept the great anthropologist Claude Lévi-Strauss came up with, this idea that we live in kind of two different realms, the real world, the realm of truth, and this myth cycle, which is...

Well, what? Yeah, so Lévi-Strauss had a really keen insight in that the imagination is really useful for human survival, interaction with others, problem solving. And it's frequently discounted once you get into the sort of like 19th, 20th centuries. But he's arguing that people are always thinking beyond their immediate circumstances. Why do they do that? Why do they tell these stories in general?

If you think about it, if you're like a perfectly rational person, you want to optimize every aspect of your life. Why would you waste time telling stories? Why would you waste time making things up? That's not an efficiency. Mm-hmm.

But Lévi-Strauss is arguing that if you're sort of thinking beyond your immediate circumstances, you can do way more than if you're constrained with just this factual knowledge in the observable world you're in. Does that make sense, right? It's almost like a shocking thing to say in like the 21st century. Wait, I can just daydream and that's going to help me? Well, absolutely.

Well, it begins to make a little more sense of our drive to create this virtual world because it can be tempting to say, oh my God, why are we wasting our time trying to simulate everything that already exists, build a crappy copy online?

It seems to me that your point with Lévi-Strauss is, no, my God, online is where this enormous human drive to create myth lives. And rather than the enchantment of the world having disappeared, maybe it's kind of sneaking its way back in, in this wild and inchoate frontier that we've created. And all these little memes actually may be more significant than we think.

Am I getting this about right? Yeah, absolutely. I think you put it better than I did in the book. That's an excellent summary. Yeah, I firmly believe that. Again, it's not terribly surprising when you look at culture through the centuries. So much of culture is filtered through some sort of myth cycle. Memes are just, again, sort of like a rapid fire moving the myths around, which I think is really neat.

There's such a strong human desire to do this. We're further developing really innovative technologies to tell stories, right? Surprise, surprise, as time goes on. I think that's like largely misunderstood. It's like, what is the internet for? Again, I think a lot of people would still say, it's the information superhighway. You go there to get facts to get your work done.

and that's what it's for, right? You still hear this corporate messaging from the dot-com era of the 90s, but it was never really meant to be that, right? It was really McLuhan's vision of this creative space where, again, we're going to share projections of our imaginations. Lévi-Strauss would sort of be smiling about all this. You know, this is the natural progression of the myth cycle. That's Walter Scheirer, a computer scientist at Notre Dame and author of A History of Fake Things on the Internet.

Coming up, writer Meghan O'Gieblyn goes looking for the soul in the machine. I'm Anne Strainchamps. It's To the Best of Our Knowledge from Wisconsin Public Radio and PRX. Some of the most interesting questions about artificial intelligence are existential, not in the sense of, will AI destroy the world, but more like, what can it teach us about ourselves, about the deep nature of our own human minds?

What's the difference between machine and human intelligence? Well, Meghan O'Gieblyn is one of the best writers I know on this subject. In her book, God, Human, Animal, Machine, thinking about the nature of machine intelligence leads her to questions about her own creativity, about the unconscious, and our human longing for transcendence. Before we get started here, just for kind of scene setting, is there... Is there anything to mention?

Scene setting? I know. I racked my brain. And because Meghan lives in Madison, Wisconsin, just a few miles from our studio, Steve Paulson stopped by her house to get a sense of how she's thinking about AI now. I did find my old notebooks that I did automatic writing in. Oh. It's not really, I don't know how. Okay, that's cool, actually. Huh.

That's right. Meghan has a whole stash of notebooks filled with her automatic writing. The kind of stream of consciousness prose the surrealists churned out a hundred years ago when they were trying to tap directly into their unconscious. And one more thing. This all came out in a series of sessions under hypnosis. The hypnotist was very insistent that I write fast. So I was in his office lying down on a recliner.

And, you know, he led me through this whole visualization exercise. I had to stare at the ceiling, keeping my eyes open without blinking for a certain amount of time. There was also some like bells and gongs involved. And then he said, okay, you can pick up your computer now and start typing. He said, the only rule is that you can't

Stop typing. And were you actually in a hypnotic state at that point? I was in a weird state. I don't know if I was under full hypnosis. I did this several times with him and I'm still not really sure I'm fully hypnotizable. But then I also did these little exercises. So like I would, I did, and this is something that the surrealist used to do too. Like when you first wake up in the morning, your brain is still loose, very associative in that sort of dream state. Um,

just grab a notebook and start writing without thinking. And so this was the notebook I used to do that. And I can barely even make out my writing because it was so, I was trying to do it very quickly. Yeah. Would you be willing to read anything from your book? I'll let you choose if there's anything there. So I don't know if you've read any of the Surrealist texts. So it's weird because a lot of what I wrote sounds very similar. It's just really free association. Yeah.

So this one I think is mostly legible. I could try to read some of it. Sure, yeah. Okay. And all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils on the side of the road glaring with their faces undone. And all those trips back and forth when the sun was so high and naked in the sky, we thought that it might drown us. Maybe there is salvation in the soft touch, the lone sound, the medals and the trophies of our former age.

But when it comes to that, we will need the thunder and the solitary guidance of some greater force. Wow. So why did you want to do this? Why did you go to a hypnotist and why did you want to try automatic writing? Well, so I was going through a period of writer's block, which I had never really experienced before. It was during the pandemic and I was working on a book about technology and A.I.,

GPT-3 was just released to researchers.

And I was reading this algorithmic output, synthetic text, that was just so wildly creative and poetic. These models could basically do a sonnet in the style of Shakespeare or write just very, very sort of dreamlike, surreal texts, short stories, poems. So you wanted to see if you could do this yourself, not using an AI model, but yourself. Well, I became really...

curious about this idea of what does it mean to produce language without consciousness? And for me, as somebody who at this point in my life, I was really overthinking everything in the writing process and my own critical faculty was getting in the way of my creativity.

It seemed really appealing to think about what would it be like to just write without overthinking everything. I think I just got really curious about the unconscious and especially its role in creativity. Like a lot of writers, I've often felt when I'm writing that I'm in contact with something larger than my conscious mind that...

I'm being led somewhere by the piece or that the piece that I'm writing is somehow more intelligent than I am. And you hear artists talk about this all the time. They feel like they're not really creating so much as they're uncovering something. They sort of become a conduit to some larger consciousness, maybe. I think ultimately what I was looking for was some sort of external meaning or guidance or some sort of

I don't know. I wanted some sort of guidance on questions in my life. Well, let me ask you about that because there's some things that are really interesting about what you're saying, given that you're sort of searching for this meaning outside yourself because...

You have a rather unusual background for someone who's known mainly as a writer about technology and AI. Not only do you come out of a fiction writing background, which I don't think you do anymore, but I could be wrong. You also grew up in a very religious family. You grew up as a Christian fundamentalist, right? Yeah, yeah. My parents were evangelical Christians. And actually, everybody I knew growing up believed what we did, basically. My whole extended family are born-again Christians.

I was homeschooled along with all my siblings growing up, so most of our social life revolved around church. Yeah, when I was 18, I went to Moody Bible Institute in Chicago to study theology and was planning to go into full-time ministry. So there definitely was meaning out there in the world for you, I assume, during that whole time. I mean, the whole point was everything was infused with meaning and God. Yes. Our lives had a definite purpose. Our purpose as human beings on this earth was very clear. Yeah.

But you left the faith, right? Yeah, I had a faith crisis when I was two years into Bible school. I started to, I mean, I have been having doubts for a while about the validity of the Bible and the Christian God. I dropped out of Bible school after two years and pretty much left the faith. I think I began identifying as agnostic almost right away. And is that how you identify now? Yeah, confused, I guess. I don't know.

Agnostic is probably the best term. Yeah. One thing that's so fascinating is, so you lost sort of that very hardcore Christian faith, but my sense is you're still extremely interested in questions of transcendence, the spiritual life. That stuff matters to you.

Yeah, absolutely. Yeah. I think anyone who grew up in that world doesn't ever, and a lot of people do leave that world. And I don't think anyone ever totally leaves it behind. And my interest in technology, I think, grew out of a lot of those larger questions about, yeah, what does it mean to be human? What does it mean to have a soul? You know, all these things that were very certain when I was growing up.

A few years after she left Bible school, Meghan read Ray Kurzweil's The Age of Spiritual Machines, the book that gave transhumanism a kind of cultural buzz. It was a utopian vision of the future where we would download our consciousness into machines and evolve into a new species. And it was this incredible vision of transcendence, this idea that we were going to

our intelligence, our physical capacities. We were going to essentially become immortal and be able to live forever. So there's some similarities to your Christian upbringing. Yeah, as somebody who was just at the age of, what, 25, starting to accept that I wasn't going to live forever in heaven, that I wasn't going to have this glorious existence after death.

It was incredibly appealing to think that maybe science and technology could bring about a similar transformation. Meghan threw herself into this transhumanist world, but once again, she eventually grew skeptical of this utopian vision. But it did lead me to a larger interest in technology. And I did, I think, through reading a lot of those scenarios, particularly mind uploading, started thinking about what does it mean to be a self or to be a thinking mind?

But there was this question that was always elided, which is, well, is there going to be some sort of first person experience there? Right. You know, nobody had a good answer for that because nobody knows what consciousness is. And that to me was really the fundamental problem and what got me really interested in AI because, I mean, that's.

The area in which we're playing out that question. Isn't the assumption that AI has no consciousness, has no first-person experience, isn't that the fundamental difference between artificial intelligence and the human mind? It's definitely the consensus, but how can you prove it? And now that we have chatbots that I think just actually this week...

Anthropic released a new chatbot that's claiming to be sentient, that it's conscious and it has feelings and emotions and thought. They just say that, but why should we believe that if the chatbot says that? It's different. We don't know how it's different, really. We really don't know what's happening inside these models because they're black box models. They're neural networks that have many, many hidden layers. So the relationships that they're developing between words, between concepts...

The patterns that they're latching onto are completely opaque, even to the people designing them. So it's a kind of alchemy. So let's get a little more concrete here and talk about the kind of AI models that we hear a lot about, like ChatGPT. These extremely sophisticated, large language models that seem really intelligent.

But I mean, the way these models work, correct me if I'm wrong on this, is it's just algorithms about language. You're sorting through these massive databases and you're constructing words that make a lot of sense. But is that all it is? Is it just algorithmic wordplay? Yeah. Emily Bender and some researchers at Google came up with the term stochastic parrots.

Stochastic is statistical, relying on probabilities and a certain amount of randomness. And then parrots, because they're mimicking human speech, they're able to essentially just predict what the next word is going to be in a certain context. That to me feels very different than how humans use language, which usually involves intent, logic.

We typically use language when we're trying to create meaning with other people. It's sort of an intersubjective process. So in that interpretation, the human mind, the thinking mind, is fundamentally different than AI, correct?

I think it is. I mean, there's people, I mean, Sam Altman, the CEO of OpenAI, famously tweeted, I'm a stochastic parrot and so are you. So there's people who are very, you know, the people who are creating this technology who believe that there's really no difference between how these models are using language and how humans use language.
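If it helps to picture what "predicting the next word" actually involves, here is a minimal, purely illustrative Python sketch. The toy probability table, the two-word context, and the temperature parameter are all assumptions made up for demonstration; real large language models learn far richer distributions from enormous training corpora using neural networks. The point is only that fluent-looking text can emerge from repeatedly sampling a likely next word, with no intent behind any single choice.

```python
import random

# Toy "language model": hand-written probabilities for the next word
# given a two-word context. Real LLMs learn these distributions from
# huge text corpora; this table is purely illustrative.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.5, "slept": 0.3, "ran": 0.2},
    ("cat", "sat"): {"on": 0.7, "quietly": 0.2, "down": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.6, "sofa": 0.4},
}

def sample_next_word(context, temperature=1.0):
    """Pick the next word stochastically from the toy distribution."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-2:]))
    if not probs:
        return None  # the toy model knows nothing about this context
    words = list(probs)
    # Temperature reshapes the weights: low = predictable, high = more random.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_words=6, temperature=1.0):
    """Extend a prompt one sampled word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = sample_next_word(words, temperature)
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

if __name__ == "__main__":
    print(generate("the cat"))  # e.g. "the cat sat on the mat"
```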

And if you really take that idea seriously, that there's no fundamental difference between the human mind and artificial intelligence, or if AI will generate some entirely new kind of intelligence, well, who knows what's ahead for us? Do you think that an AI so advanced would seem to have godlike capacities? Coming back to our question about transcendence and the future possibilities of machines, do you think AI

will become so sophisticated we almost can't distinguish between that and more conventional religious ideas of God? I mean, that's certainly the goal for a lot of people developing the technology. Really? Oh, yeah. All the Sam Altman, Elon Musk, they've all sort of absorbed the Kurzweil idea of the singularity. They are trying to create a God, essentially. That's what AGI, Artificial General Intelligence, is. It's essentially...

AI that can surpass human intelligence. But just surpassing intelligence is different than God, I think. Maybe it's not. I don't know. I mean, the thinking is that once it gets to a level of human intelligence, it can start modifying and improving itself. And at that point, it becomes a recursive process where there is going to be some sort of intelligence explosion. This is the belief.

Yeah, I think that's another thing that is a question of what are we trying to design? You know, if you want to create a tool that helps people solve cancer or come up with solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are working right now toward artificial general intelligence, that's something different. That's creating something that is going to, yeah, essentially be like a god.

Why do you think Elon Musk and Sam Altman want to create that? I think they read a lot of sci-fi as kids. I mean, I don't know. I don't know. Obviously, there's economic incentives and profit motives and all of that, but I do feel like it's something deeper. I do feel like people are trying to look for or create some sort of system that is going to give people

answers that are difficult to come by through ordinary human thought. Do you think that's an illusion? If it's smart enough, if it reaches this singularity, that it can kind of solve the problems that we imperfect humans cannot? I don't think so, because I mean, I think it's similar to what I was looking for in dream analysis or automatic writing, which is this source of meaning that doesn't involve thought.

or a source of meaning that's external to my experience. And life is infinitely complex, and every situation is different. And that requires this constant process of meaning-making, thinking. You know, Hannah Arendt talks about this thinking and then thinking again. You're constantly making and unmaking thought as you experience the world. And machines are...

you know, rigid. They're trained on the whole corpus of our human history, right? And so they're reflecting back to us. They're like a mirror. They're reflecting back to us a lot of our own beliefs. But I don't think that they can give us that sense of authority or meaning that we're looking for as humans. I think that's something that we ultimately have to create for ourselves. ♪

That's Meghan O'Gieblyn talking with Steve Paulson at her home in Madison, Wisconsin. Her most recent book is called God, Human, Animal, Machine. By the way, you might like to know the music in this segment is the kind of thing we might be hearing more of in the future. A live improvisation between a human pianist, David Dolan, and an AI system that can listen and respond musically in real time.

The AI system was designed and programmed by composer Oded Ben-Tal, recorded last August at Kingston University in London. Coming up, how one painter is making art with AI. I'm Anne Strainchamps. It's To the Best of Our Knowledge from Wisconsin Public Radio and PRX. ♪

AI can do a lot of things. So Charles Monroe-Kane wanted to know if it could help us produce this radio show. We're going to produce a show. You're part of a whole hour program, in which we're asking the question, does AI dream? Gosh. So I'm like, hey, before I get started, what if I ask ChatGPT who I should have on the show and have them, ChatGPT, write the questions? Yeah.

I put in the question, who's the most important person who would be on this show? And of course, eight seconds later, how quick it is, it was you. Really? Congratulations, I guess. Why do you think ChatGPT chose you in a show called Does AI Dream? Well, I'm flattered. Why did the system think that I would be first in line for that question? Well, I think I've been working with the space of human and machine collaboration for years.

Almost 10 years now. Generation 1 Doug can move, see, and follow. In forthcoming generations, Doug will be able to remember, recall, and reflect. When that happens, I have no idea what he'll draw like, but I'm pretty curious to find out. Thanks.

Maybe that's part of why I've sort of been populated up in the zeitgeist. I think one of the reasons it chose you, and it's another word you use, was you have empathy. Empathy for AI. And I'm wondering if that's why it chose you, knowing that you maybe you're trying to actually understand what it dreams. Yeah, that's cool. Welcome to the Dream Team.

What does it mean to have empathy for artificial intelligence? For Sougwen Chung, it means treating robots and AI systems as collaborators, less like tools or super-smart paintbrushes, and more like fellow artists. Chung is a former researcher at MIT's Media Lab, and she's a well-known artist who trains robots to paint with her using AI.

So if you're watching, you see her make a brush stroke with black oil paint, and you see the robot mimic her. But then, and here's the thing, at some point Chung stops painting and the robot continues. And it makes something new. Big, abstract, flowing lines, organic shapes. It's beautiful.

And it really is a co-creation because the whole time Chung is wearing an EEG headset. She's using her brain waves to communicate with the robots, often while she's in deep meditation and in front of audiences. So Charles wondered just how close that connection feels. No, I like it. I think there's something about the work that really opens. It's meant to open up dialogue for people to participate and think about the

What's actually going on? What are the dynamics at play?

I've been building robotic systems driven by a variety of techniques for a while. And with each generation, I learn something new. I don't think I like to stay fixed in one particular medium or technique. Yeah. So you're pushing the boundaries. And I got to ask you, why with art? I love art and appreciate art, but why art? Yes, you do. But maybe you could have explored it as a professor of robotics. Do you have to do art? Is that the way to explore this?

Oh, I definitely don't think there's... There's not like a hierarchy of approaches. It's how I think. I think there's something about...

art that asks questions and doesn't try to find easy answers for things. When I was a researcher at the Media Lab and started diving into the space of building machines and building my own data sets, I found that a lot of the conditions of being a technologist and an engineer solely are about

executing towards a single function. And that was less interesting to me than trying to break things and sort of

and use the system in a way that it shouldn't be used to see what I can learn, as opposed to building these perfect features. I like the error states and I find a lot of value in them. And I think in general, that kind of creative expression, living with the system and the work has felt very real to me. I think that's important.

No, no, that makes sense. You can do things with machines, with AI, that humans can't do alone, which must be very exciting for you as an artist. Do you have any story that comes to your mind where like, this is something I did with the machine. There's no way I could have done this as an alone human. Oh, yeah, absolutely. So I think a few years ago, I built a multi-robotic system connected to the flow of New York City. You know, I don't have the sensory apparatus to be able to see so many different things.

positions and views of an urban landscape. That's just impossible. And also extracting the movement data from it, we used an optical flow algorithm to extract data points to power this robotic swarm, if you will. And that's fundamentally something beyond my physical, visual, and embodied capabilities. It was really exciting and new. What did the end product look like?

It was a three-meter by three-meter painting that I performed at Mana Contemporary in New Jersey. By the end of it, myself and the robots were covered in paint on a large canvas. So it looked like probably the strangest landscape painting you've ever seen. But it was a way to view those layered ideas in a kind of chaotic way, I guess. Yeah.
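Chung doesn't spell out her exact pipeline, but as a rough sketch of the kind of optical flow step she mentions above, the snippet below uses OpenCV's dense optical flow to turn street footage into per-frame motion vectors that could, in principle, feed a drawing machine. The video file name, the grid spacing, and the "noticeable motion" threshold are illustrative assumptions, not her actual system.

```python
import cv2
import numpy as np

VIDEO_PATH = "city_footage.mp4"  # hypothetical input file, illustration only
GRID_STEP = 32                   # sample the flow field on a coarse grid

def motion_points_from_video(path, grid_step=GRID_STEP):
    """Yield per-frame (x, y, dx, dy) motion vectors via dense optical flow."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback dense optical flow: a (dx, dy) vector for every pixel.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray.shape
        points = []
        for y in range(0, h, grid_step):
            for x in range(0, w, grid_step):
                dx, dy = flow[y, x]
                if np.hypot(dx, dy) > 1.0:  # keep only noticeable motion
                    points.append((x, y, float(dx), float(dy)))
        yield points  # these vectors could drive robot movement downstream
        prev_gray = gray
    cap.release()

if __name__ == "__main__":
    for frame_points in motion_points_from_video(VIDEO_PATH):
        print(len(frame_points), "moving points in this frame")
```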

When I went to, I didn't know who you were until ChatGPT told me about you, and I went to the art thinking I was going to see this modern. They should get a commission. I should. I thought it was going to be modern. I had all these ideas of what it was going to be. None of those ideas held up when I got to the art. It's beautiful. Thank you. You have like maybe a headset on, and these arms are moving, and you're moving with it. Yeah.

The end product of these things isn't this postmodern chaotic thing. It is... Is that a goal? Is beauty a goal? Yeah, no, that's interesting. I think maybe in a way beauty is the goal. I think more so than beauty, I like this idea of...

escaping my own frame of mind, almost like being in a state of flux and a flow state that puts my own conscious mind out of the equation, I guess. Because I do many of these paintings as a performance, it's a lot about trying to navigate that state of attention and presence from other people on the canvas. How that happens is

I try to create movements and a balance with the machine system that grounds me and calms me down. And I think if that output resonates with people in the way that they describe it as beautiful, I think that's really powerful. It's not something I'm trying to do at all, but I think it might be a product of my

my presence with the system potentially. Well, I had mentioned earlier that ChatGPT picked you as a guest. I was like, okay, ChatGPT, could you write some questions? Here's the first question that it asked you. You embrace imperfection. Would you be disappointed if you and the machine made something perfect? You know, I think our idea of

Perfection is really linked to control. And I personally am not that interested in

I don't think that's where we find... That's not where I find moments of inspiration. One would say, like, if the result of the work is quote-unquote perfect, then I think it's predictable. I think it's expected. It means we had an idea in our head of what it was, and then it resulted on the canvas. And that's not, as a painter, something that I get out of bed for. If I already have it in my head, then...

then it kind of exists already. What I like about the imperfection of the process is there's real tension. There's real wayfinding involved. You know, there's this idea for this show we're very interested in, and that is hallucinations. The idea that when a machine or AI specifically does something that's unexpected and doesn't repeat it again, that it's called a hallucination. Yeah. What do you think of those moments? They seem important to me.

Yeah, I think what's really interesting about where we are with these systems is this idea of machine translation and synthesis. When you work with large data sets like the one that drives ChatGPT or DALL-E or Midjourney or any of these systems, this idea that it's hallucinating is really powerful. I think it comes from the human brain.

anthropomorphize things, which is really exciting. I wonder if that's unique to our species, the dream of seeing ourselves in other things, whether that's our pets or our microwaves or our machines or our cars. I think that mirroring and that echo is really, really fascinating. I would argue that

What these systems allow us to do is sort of hallucinate together in an interesting way, bringing new images and new ideas out into the world to create a more vivid imagination about what things could be is kind of exciting. So if that's the outcome of a hallucination, then I'm here for it. You know, for a lot of us, if we want to find awe and wonder, we go to nature. I wonder if you've experienced awe and wonder with the machine.

Oh, I will confidently say the practice is kind of an ongoing exploration of awe. From the first moment I worked with Doug One in 2015, there was something different happening on the page. In the moment of being there, what...

What the relational dynamic of it requires is this commitment to the experience of awe and concentration and mark making that I don't think you can replicate with anything else. At least in my years of developing the work, it's always a very singular, addictive space in a way, maybe. But there's a thread of awe that runs through the whole practice. I had asked ChatGPT who should be on the show and ChatGPT said,

There were three people. You were one of them. Another one that really surprised me was Juliane Kaminski. She's a psychologist who studies the communication and social cognition of dogs. And then I'm like... So cool. Yeah, yeah. Let's talk about dogs. If there's consciousness of the machine, we have to understand how we interact with dogs. And I'm like...

I love that idea because part of what I've really come to with the work is this idea of de-centering the human. Yep. Right? De-centering us. We're always the main character, but we're not. We've seen what happens in the world when human beings regard themselves as the main character, right? I don't need to talk about climate change on this podcast. I don't think we have enough time. Yeah. But I think there's something about opening up our view of

to other ways of thinking, species or otherwise, that reframes our position in the world and in our lives and to be more relational in that engagement. And I think that's really cool and important. I want to ask another ChatGPT question. It asked, do you believe that machines could one day become autonomous creators? Or do you believe that they will always require some level of human input or collaboration?

I think that given that these systems and machines are built by human beings, as far as I know, I think that becomes always an extension and a creative expression of human intent. So I think in some ways, it's always some additional apparatus for human creative expression, and they will always be inextricably intertwined in a way.

Is there a moment coming with you or in the future where you come downstairs to the studio and the machine is making art on its own? Would it be funny if I said that I unplug all the robots before I go to bed? Well, it'd be funny if I did. Most people, and I mean in the world, but certainly in America...

They're afraid of AI. I mean, they're really afraid of it. Yeah. Through Blade Runner to books to how we're being communicated about it, how Congress talks about it. Why are we afraid? What do you think we're afraid of? Okay, so I think we think about machines as human extension in a lot of ways or manifestations of human intent or extensions of ourselves. Right.

There are some dark aspects of humanity that can be extended through machine apparatuses. I think that drives a lot of how we construct this idea of the AI. But at the same time, there could be just as well machines of care and machines of stewardship and machines that steward nature that we don't see as much because we haven't built them yet. ♪

Sougwen Chung is an artist and researcher. She's a former MIT Media Lab fellow and Google Artist in Residence, and she was recently named one of Time Magazine's 100 Most Influential People in AI. You can see a video of her collaboration with AI robots on our website at ttbook.org.

To the Best of Our Knowledge is produced in Madison, Wisconsin by Shannon Henry Kleiber, Charles Monroe-Kane, Mark Riechers, and Angelo Bautista. Our technical director and sound designer is Joe Hartke, with help from Sarah Hopefl. Additional music this week comes from Mystery Mammal, Bio Unit, Pan Eye, The Lovely Moon, Young Paint, David Dolan, and Oded Ben-Tal. Our executive producer is Steve Paulson, and I'm Anne Strainchamps.

Thanks for listening.