This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech.
Feel your Max with Brooks Running and the all-new Ghost Max 2. They're the shoes you deserve, designed to streamline your stride and help protect your body. Treat yourself to feel-good landings on an ultra-high stack of super-comfy nitrogen-infused cushion that takes the edge off every step, every day. The Brooks Ghost Max 2. You know, technically, they're a form of self-care. Brooks, let's run there. Head to brooksrunning.com to learn more.
There are lots of stories about mind reading. Stories about people who can eavesdrop on your thoughts. I can read every mind in this room. Stories about aliens who can communicate telepathically. You can read my mind, I can read yours. Even stories about machines built to make thoughts more transparent. We'll be measuring the tiniest electrical impulses of your brain.
And we'll be sending impulses back into the box. But one thing these stories all have in common is that they are just stories. Until pretty recently, most mainstream scientists agreed that reading minds was the stuff of fiction. But now... New research shows that tech can help read people's private thoughts. They're training AI to essentially read your mind.
In the last few decades, we've been able to extract more and more things from people's minds. And last May, a study was published in the journal Nature Neuroscience that got a lot of play in news outlets. In that paper, a group of Texas scientists revealed that they'd been able to translate some of people's thoughts into words on a screen. This thing that you could call mind reading. But do we want machines reading human minds?
I'm Byrd Pinkerton, and on this episode of Unexplainable, how much can these researchers actually see inside of our heads? Will they be able to see more someday soon? And what does all this mean for our privacy? ♪
I reached out to one of the co-authors on this paper to get some answers to these questions. It's this guy named Alex Huth, who researches how the brain processes language. And Alex has a word of caution on the terminology here. A lot of people call this mind reading. We don't use that term in general because I think it's vague and like, what does that mean? He prefers a more descriptive word, which is decoding.
So basically, when the brain processes language or sounds or emotions, whatever, it generates this huge flurry of activity. And we can capture that activity with a variety of tools. So like electroencephalography, for example, which is EEG, that reads electrical impulses from the brain. Or fMRI machines will take pictures of our brain at work as we react to the things that we experience.
But then researchers like Alex have to decode the cryptic signals that come from these machines, right? And in Alex's case, their lab is trying to parse exactly how the brain processes language. So for them, decoding means taking the brain responses and then trying to figure out, like, what were the words, what was the story that elicited these brain responses? So how do you do that? That is what this paper from May was all about.
Step one in their process of decoding the mind is, and I swear I am not making this up, listening to lots of podcasts. So we just had people go in the MRI scanner over and over and over and over and listen to stories. That was it. Alex and his fellow researchers, they took seven people and they played them a variety of shows. This is the Moth Radio Hour from PRX. So moth stories, right? The Moth Radio Hour. And also the Modern Love podcast from the New York Times.
So we're just listening to tons and tons of these stories, hours and hours and hours. Which sounds kind of fun. Right? It's like, it's not that bad. It's a dream experiment, really. So that's it for this episode of the Moth Radio Hour. But then things got a little less dreamy because the researchers actually had to decode all this very fun data to kind of match up words and phrases from these podcasts to the signals coming from the brain, which might sound...
easy, but unfortunately, fMRI has one small problem. Which is that what it measures sucks.
fMRI measures blood flow in the brain and the amount of oxygen in that blood. It turns out that when you have a burst of neural activity, if your neurons are active, they call out to nearby capillaries and say, like, hey, I need more energy. So let's say you hear the word unexplainable, for example. A bunch of neurons in different parts of your brain will fire and call for energy, which comes to them via blood. And over the next...
three-ish seconds, you see this increase in blood flow in that area. And then over the next five seconds, you see a slow decrease. But it's not like your brain is only firing one thought at a time and then kind of waiting for blood flow to clear an area, right? It's potentially hearing lots of words, even whole sentences in that eight to 10 second period. Like maybe it's hearing...
"Thanks so much for listening to Unexplainable. Please leave a review." And all those words could trigger activity in the brain, which leaves researchers like Alex with this very messy, scrambled picture to decode. Because that means that every brain image that we measure is really some mushy combination over stuff that happened over the last 10 seconds.
So if every brain image that you see is like a mushy combination of 20, 30 words, like how the hell can you do any kind of decoding? For a while, the answer was you could not really do very much decoding. This was a huge roadblock to this research until around 2017, when we got the first real seeds of something you've
almost certainly heard about in the news. This chat bot called ChatGPT. It's a large language model AI. Trained on a large amount of text across the internet. The language model that powers ChatGPT is much more advanced than what Alex's team started using. They were working with something called GPT-1, which is like a much more basic model that came out in 2018.
But this model did help Alex and his team sort of sort through the mushy, noisy pictures that they were getting from fMRI scans and sharpen the image a little bit. It was still hard. Like, even with a language model helping him, it took one of Alex's grad students, this guy named Jerry Tang, years to really perfect this.
Finally, after some testing, some retesting, checking their work, they were successful. They could pop someone into an fMRI machine, play them a podcast, scan their brain, and decode the signals coming from their brain back into language on a screen.
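The paper lays out the team's actual method; the sketch below is only a toy illustration of the general guess-and-check idea behind this kind of decoding, and every function in it is a stand-in: a language model proposes candidate words, a separate encoding model predicts what brain response each candidate should produce, and the candidate whose prediction best matches the measured scan wins.

```python
# A minimal, hypothetical sketch of language-model-assisted decoding.
# None of these functions are from the study; they are placeholders that
# show how the pieces could fit together.
import numpy as np

def propose_next_words(text_so_far):
    """Stand-in for a language model (like GPT-1) suggesting likely continuations."""
    return ["driver", "license", "dragon", "dog"]            # illustrative only

def predict_brain_response(text):
    """Stand-in for an encoding model mapping words to an expected fMRI pattern."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=50)                               # fake 50-voxel pattern

def decode_step(text_so_far, measured_scan):
    best_word, best_score = None, -np.inf
    for word in propose_next_words(text_so_far):
        predicted = predict_brain_response(text_so_far + " " + word)
        score = np.dot(predicted, measured_scan)             # similarity to the real scan
        if score > best_score:
            best_word, best_score = word, score
    return best_word

measured_scan = np.random.default_rng(0).normal(size=50)     # pretend fMRI data
print(decode_step("I don't have my", measured_scan))
```

The language model's job in a setup like this is to keep the guesses limited to word sequences that sound like actual English, which is what makes the mushy, delayed fMRI signal usable at all.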
The decoding here was not perfect. Like, for example, here's a sentence from a story that the researchers played for a subject. I don't have my driver's license yet, and I just jumped out right when I needed to. Their decoder interpreted the brain scans and came up with this.
Again, the clips alternate: the story that was played, then the decoder's version. The story. The decoder. So as you can hear, in the decoder's translations, pronouns get mixed up. In other examples that the researchers provide in their paper, some ideas get lost, others get garbled.
But still, overall, the decoder is picking up on the kind of main gist of the story here, and it's not likely that it was just lucky. Like, it does seem to be reading these signals.
And that would be amazing enough. But the researchers did not stop there. Jerry designed a set of experiments to test kind of how far can we go. For example, they wanted to see if they could decode the signals coming from someone's brain if the person was just thinking about a story and not hearing it. So they ended up having people memorize a story and then instead of playing them a podcast...
They just asked them to think about the story while they were in an fMRI machine. And then we tried to decode that data. And? It worked pretty well, which I think was kind of a surprise, the fact that that worked. Because this meant that this tool wasn't just detecting sort of what a person was hearing, but also what they were imagining.
Which is also interesting because it suggests that there's some kind of parallel, potentially, between hearing something and just thinking about it. Like, our brains are doing something similar when we listen to speech and when we think about it. And the researchers found other interesting parallels, too. Like, they tried this other experiment. Which was just...
weird and I still think it's kind of wild that it worked. We had the subjects go in the scanner and watch little videos. Silent videos with no speech, no language involved. They were actually using Pixar shorts.
And again, they collected people's brain activities while they were watching these things and then popped that activity into their decoder. And it turns out that the decoded things were quite good. For example, one video is about a girl raising a baby dragon. And in the decoding example that they give...
There are definitely moments that the decoder is way off. Like at one point, something falls out of the sky in kind of a surprising way, and the decoded description is, quote, "My mom brought out a book and she was like, 'Wow, look what I made.'" Which is not super related.
But other moments do sync up pretty well. Like at one point, the girl gets hit by a dragon tail and falls over, and the decoded text is, quote, "I see a girl that looks just like me get hit on her back, and then she is knocked off." And that was wild. It also potentially says something really interesting about the brain, right? Like that even as we watch something that doesn't involve language at all,
On some level, our brains seem to be processing it into language, sort of descriptions of what's on screen. That was like exciting and weird, and I don't know that I expected that to work as well as it did. Now, this research is part of a longer line of work. Like, other researchers have been able to do stuff that's sort of similar to this by implanting devices into the brain, for example. They've even been able to use fMRI machines to reconstruct images and sounds that brains have been thinking about.
But Alex and his lab, they've really taken an impressive step towards decoding part of this sort of messy chaos of freewheeling thought that runs through someone's head.
And that's kind of wild. You know, the first response to seeing this was like, oh, this is really exciting. And then the second response was like, oh, this is actually kind of scary, too, that this works. It's especially unsettling, at least to me, from a privacy perspective. Like, right now, I can think pretty much whatever I want, and nobody can probe those thoughts unless I choose to share them.
And to be clear, it's not obvious that this technology is going to change that. There are a lot of barriers in place right now keeping our brains private. Like, these decoders have to be tailored to one individual brain, for example. You can't take the...
whatever, many hours of another person sitting in the scanner and use it to predict this person's brain responses or decode this person's brain responses. So unless you're currently in an fMRI machine having your brain scanned and you also recently spent many hours in an fMRI machine listening to podcasts, you probably don't need to worry too much that someone is reading your thoughts.
And even if you are in an fMRI machine listening to, I don't know, this podcast, you could still keep your thoughts from being read because Alex and his team tested whether someone had to cooperate in order for the decoder to work.
Like, if they actively try to make it not work, does it fail? And it turns out that, yes, it does fail in that situation. Like, if a subject refuses to listen and does math in their head, for example, like, takes a number and keeps adding seven to it, the decoder does a really bad job of reading their thoughts as a result. Like, its answers become much more random.
Still, barriers like this, like the need for a bespoke decoder for each person's brain or the ability to block a decoder with one's thoughts, might not last. That's definitely not a fundamental limitation, right? That's definitely not something that's like...
never going to change. Maybe it won't. Maybe that'll still be necessary. But that doesn't seem like a fundamental thing. And certainly it's something that we could potentially improve in the future. Alex says this is just a fundamental unknown at this point. Like, he doesn't see a way with our current technology to build a true mind-reading device that works across every brain. We don't even know if that's possible.
But this is also the very beginning of this research. Like, again, he used the earliest version of the language model that powers ChatGPT to do this, this thing called GPT-1. But we are now on GPT-4, and it's a lot more powerful than its predecessors. So who knows how much more powerful a decoder like this could become using that more advanced technology. Maybe, and again, this is a maybe, but...
Maybe it'd even be possible to do this kind of decoding with simpler machinery. Like, you might not need a big, hulking device like an fMRI machine. Maybe you could use a wearable device like EEG that records electrical signals from your brain. It might be impossible. We don't know. I don't think it's going to work with EEG, but, you know, 10 years ago, people would say, "I don't think this is going to work with fMRI," and it does work with fMRI, so who knows? So what does all this mean?
I don't think, and Alex doesn't think, that we're going to wake up tomorrow and find that our innermost thoughts are available for anyone to read. But I also don't think that we should say, you know, oh, this decoding stuff, it'll just remain a scientific curiosity. We can ignore it. You know, we'll never live in a world where some amount of brain decoding is taking place.
And I think that because I spoke to an ethicist who says that we should be thinking very hard about what brain decoding could mean for all of us in the future, especially because some people already live in a world where a form of mind reading, admittedly a much lower-level form, but a form of mind reading nonetheless, is part of their day-to-day. That's after the break.
Hey, Unexplainable listeners. Sue Bird here. And I'm Megan Rapinoe. Women's sports are reaching new heights these days, and there's so much to talk about and so much to explain. You mean, like, why do female athletes make less money on average than male athletes?
Great question. So, Sue and I are launching a podcast where we're going to deep dive into all things sports, and then some. We're calling it A Touch More. Because women's sports is everything. Pop culture, economics, politics, you name it. And there's no better folks than us to talk about what happens on the court or on the field.
and everywhere else too. And we're going to share a little bit about our lives together as well. Not just the cool stuff like Met Galas and All-Star Games, but our day-to-day lives as well. You say that like our day-to-day lives aren't glamorous. True. Whether it's breaking down the biggest games or discussing the latest headlines, we'll be bringing a touch more insight into the world of sports and beyond. Follow A Touch More wherever you get your podcasts. New episodes drop every Wednesday.
The Walt Disney Company is a sprawling business. It's got movie studios, theme parks, cable networks, a streaming service. It's a lot. So it can be hard to find just the right person to lead it all. When you have a leader with the singularly creative mind and leadership that Walt Disney had, it like goes away and disappears. I mean, you can expect what will happen. The problem is Disney CEOs have trouble letting go.
After 15 years, Bob Iger finally handed off the reins in 2020. His retirement did not last long. He now has a big black mark on his legacy because after pushing back his retirement over and over again, when he finally did choose a successor, it didn't go well for anybody involved.
And of course, now there's a sort of a bake-off going on. Everybody watching, who could it be? I don't think there's anyone where it's like the obvious no-brainer. That's not the case. I'm Joe Adalian. Vulture and the Vox Media Podcast Network present Land of the Giants, The Disney Dilemma. Follow wherever you listen to hear new episodes every Wednesday. The mind of this young researcher is as frantic and busy as a, say, as a city.
So far, we have been talking about technology that can look at a bunch of brain data and translate it to tell researchers what a subject is hearing or thinking. It's amazing, but at least for now, it involves a lot of clunky technology, a lot of time, and a lot of cooperation from the person whose mind is being decoded. So most people are probably not going to have machines spitting out all their exact thoughts anytime soon.
But don't let that comfort you. Nita Farahany is still concerned. She is a bioethicist who studies the effects of new technologies, sort of what they mean for all of us legally, ethically, and culturally. And recently, she published a whole book about tools that read the brain. I was somebody who had already been following this stuff for a long time. And as I dove into the research for the book, I was like,
I mean, I was like, what? Really? Nita is less focused on fMRI research trying to get at exact thoughts. And instead, most of her book focuses on different brain reading tools. These tools that are becoming more and more commonplace. Everyday wearables, primarily, that read electrical activity in the brain. Basically, when you think or when your brain sends instructions to your body,
your neurons give off a little electrical discharge. And because hundreds of thousands of neurons are firing in your brain at the same time, you can pick up using brain sensors the broader signals that are happening. This is electroencephalography, or EEG, this technology we've mentioned before. It's less precise than something like fMRI. Like, it doesn't tell you where in the brain the signals are coming from.
But it also doesn't require you to sit in a loud machine for hours, right? Like EEG devices can take readings by being applied to the head. And also when the brain sends signals out into the body, like say into the wrist, other sensors can measure the electrical activity of the muscles that happens as a result. And they can be miniaturized and put into earbuds and watches and headphones. Right.
Because the level of detail is lower, there isn't a way, at least right now, to kind of use EEG readings to do what Alex can do with an fMRI machine, right? To decode brain activity into words running through people's heads. But these devices can detect things like alertness, tiredness, focus, or reactions to stimuli.
And these readings aren't always very precise, but as Nita dove into her research, she found that these devices are already being used in all kinds of contexts. It would be like, oh, imagine if it was used in this way, and then I would find an example of it being used in that way, and I'm like, what? Some of the uses or potential uses for these EEG tools are
actually kind of promising. Like, they could help people track their sleep better, potentially track cognitive deterioration. Nita says they could maybe help people with epilepsy get alerts about changes in their brain that could mean a seizure, and they could help people measure their own pain more accurately. But they also have a lot of uses that feel a little closer to invasions of privacy.
So, for example, these wearable EEGs can be used to measure recognition.
Like when your brain sees something, any kind of stimulus, like a house or a face or a goose, say. Your brain reacts to the stimulus and it reacts differently if you recognize it versus if you don't recognize it. It does this super fast, like even before you're consciously aware of it. And if you recognize that goose or face or house, your brain then fires a signal that says, I know that goose or face or house. And
Because an EEG reader can then detect that signal, a researcher named Dawn Song, along with some collaborators, showed that this can be used in pretty concerning ways. What they did was, as people were playing video games wearing EEG devices,
subliminally, they flashed up images of numbers, and they were able to go through and figure out recognition of numbers without the person even knowing that the numbers were being flashed up in the video game. And just by doing this, just by supplying sort of subliminal prompts and then measuring reactions, these researchers were able to get some pretty personal data. Things like your PIN, even home addresses, through this recognition-based interrogation of the brain.
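The published attack is more involved than this, but here is a rough, simulated sketch of the underlying signal trick: average EEG snippets time-locked to each flashed item, and the larger recognition bump a few hundred milliseconds after a familiar item gives it away. The timings, amplitudes, and data here are all made up for illustration.

```python
# Hedged sketch of recognition-based probing (not the actual experiment).
# We simulate EEG snippets ("epochs") after each flashed digit; recognized
# digits get a bigger bump ~300 ms later, and averaging over repeats
# cancels the noise so the bump stands out.
import numpy as np

rng = np.random.default_rng(42)
fs = 250                                  # EEG samples per second
window = np.arange(0, fs)                 # 1-second snippet after each flash

def simulated_epoch(recognized):
    noise = rng.normal(0, 2.0, size=window.size)
    bump = np.exp(-((window / fs - 0.3) ** 2) / 0.005)   # bump ~300 ms after stimulus
    return noise + (3.0 * bump if recognized else 0.5 * bump)

# Pretend the subject's PIN starts with 7: flashes of "7" are recognized.
digits = list(range(10))
epochs = {d: [simulated_epoch(recognized=(d == 7)) for _ in range(40)] for d in digits}

# Average across repeated flashes, then compare the response near 300 ms.
scores = {d: np.mean(epochs[d], axis=0)[int(0.3 * fs)] for d in digits}
print("Most likely first PIN digit:", max(scores, key=scores.get))
```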
That same recognition measurement has also been used in criminal investigations. Police have interrogated criminal suspects to see whether or not they recognize details of crime scenes. This is not a new thing. Like, as early as 1999, a researcher in the U.S. claimed that he could use an EEG lie detector test to see if convicted felons recognized details of crimes.
This has been used by the Singapore police force and by investigators in India as evidence in criminal trials. And there are lots of arguments that the data that comes from these machines is not good enough or reliable enough to base a criminal conviction on. But whether or not this technology really works, if people believe the results of an EEG lie detector like this, it can have really serious consequences.
And not just in the court system. Like an Australian company came up with a hat that monitors EEG signals of employees. There's already a lot of employers worldwide who've required employees to wear devices that monitor their brain activity for whether they're tired or wide awake, like for commercial drivers. It's also big in the mining industry. Caps like this have been worn by workers not just in Australia, but across the world.
And while that might seem worthwhile if it prevents accidents, some places have started monitoring more than just tiredness. Like there are reports of Chinese companies rolling out hats for their employees. Testing for boredom and engagement. Even depression or anxiety. The reporting around these suggests that EEG is way too limited to do a great job at reliably detecting those kinds of emotions.
But again, these tools don't need to work well to have professional or privacy consequences. There's risks on the side of like if it's really accurate and, you know, what it reveals. And then there's risks on it not being perfectly accurate and how people will use or misuse or misinterpret that information. I think this workplace stuff is especially startling to me because when I first started reading about these EEG devices, I thought, wow.
OK, like I will simply never purchase a watch that monitors my brainwaves like problem solved. Yeah. I mean, so most people's first reaction to like hearing about the stuff is like, OK, I'm just never going to use one of those. You know, great. Like, thank you for letting me know. I will avoid it at all costs. Right. But if you have to have one of these for work, like that takes away that element of choice.
Or similarly, Nita told me about this EEG tool in the works right now that lets you type just by thinking. If something like that becomes the default way of typing,
then maybe having a brain monitoring tool like this also becomes the default. Like having a cell phone. Technically, you can live without one, but it is logistically difficult. It both becomes inescapable and people are like generally outraged by the idea that most companies require the commodification of your personal data to use them as like free services, whether that's a Google search or that's, you know, a Facebook app or a different social media app.
And then they seem to forget about it and do it anyway. And so, like, there's all kinds of evidence that people trade their personal privacy for the convenience all the time, right? This is why Nita says that we should think seriously about the implications of technologies like these EEG readers right now.
as well as the implications of more advanced thought reading technologies, like the fMRI-based ones that researchers like Alex are working on. It's really exciting to make our brains transparent to ourselves. But once we make our brains transparent to ourselves, we've also made it transparent to other people. And like at the simplest level, that's terrifying. So, I mean, I think...
From my perspective, there is nothing more fundamental than the basic sanctity of our own minds. And what we're talking about is a world in which we had assumed that that was inviolate, and it's not. All this made me wonder, like, should we shut all this down? Like, should we stop trying to find ways to read minds and just tell researchers like Alex Huth to stop working on stuff like his fMRI brain decoder? Yeah.
For Alex, it's tricky because this research isn't like working on the nuclear bomb, for example. Like it's not a tool that is pretty much only good for killing people. I think it's more like, I don't know.
computers themselves. We have shown that computers can be used for bad things, right? Like they can be used to surveil us or collect data about us as we browse the internet. They're also very good. They're used in all kinds of ways that are very good. Similarly, like if EEG devices are used to monitor brainwaves and then detect problems like Alzheimer's or concussions,
That would be a win. And if the fMRI work in Alex's lab helps us understand the fundamental workings of the brain, how our mind processes language, I think that's good. And other versions of brain reading tech are being used to help people with paralysis communicate. I think in the same way that it's like something can be big and have implications in a lot of different ways, it kind of matches that mold rather than like nuclear bomb mold. But he does worry.
After his paper came out, Alex actually reached out to Nita to ask about the ethical implications of his work. And he was not particularly surprised when she told him that decoding minds could lead to pretty concerning consequences for privacy. Yeah, I mean...
I've been reading her book, so I think I kind of knew what page she was on. The thing that did surprise him was when he started asking her about some further experiments his team was considering. Like, right now, for example, Alex says their decoder can pick up the stories someone is hearing, but not the stray, random thoughts they're having about that story. Like, incidental thoughts? It's not clear whether or not it's even possible to pick up those kinds of thoughts, but...
When Alex was talking to Nita, he asked her, should he try and figure out if it's possible? Like, should he try and probe deeper into people's minds? Are there things that we shouldn't do? Like, is this a thing that we shouldn't do? He thought she'd say, Alex, shut it down. Like, stop going deeper. But she didn't. If we don't have the facts, it's very difficult to know what the ethics should be. Her view was different.
her community, the ethicists, philosophers, so on, lawyers, they need data. They need information to do what they do. And they need information like, is this possible or not? You know, unless you know what you're dealing with, how do you develop effective countermeasures? How do you develop effective safeguards? So she was like, you should do that. Like, you kind of have a responsibility to do that.
So now Alex is in a kind of an odd position. It's a little weird. It's a little weird. Like feeling maybe we have a responsibility to do these things now that are creepier because, I don't know, so we can see like what the limits are and we can talk to people about that openly instead of somebody just going and doing it and hiding it away. I don't know. I don't know either. But I do understand this argument that it's important to figure out the unknowns here.
Some of this stuff still feels kind of like science fiction to me, and it's hard to know really how far this tech will advance or how transparent it could make our brains. But I do think there is at least a case here for mapping things out, right? To understand what the limits of this technology might be so that we can put safeguards in place if we need to.
Nita Farahany is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. If you want to hear more from her, Vox's Sigal Samuel did a great interview with her on the Gray Area podcast. Look for Your Brain Isn't So Private Anymore. And Sigal also has a great text piece about mind decoding on our site, vox.com.
You can find out more about Alex Huth's work by looking up the Huth Lab at the University of Texas at Austin. This episode was produced by me, Byrd Pinkerton. It was edited by Brian Resnick and Meredith Hoddinott, who also manages our team. We had sound design and mixing from Cristian Ayala and music from Noam Hassenfeld. Serena Solin checked our facts, and Mandy Nguyen's favorite fruit is mango.
This podcast and all of Vox is free in part because of gifts from our readers and listeners. You can go to vox.com slash give to give today. And if you have thoughts about our show or ideas for episodes that we should do in the future, please email us. We are at unexplainable at vox.com. You can also leave us a review. Both would be very much appreciated.
Unexplainable is part of the Vox Media Podcast Network, and we will be back next week. Life is full of complicated questions. I want to know how to tell if my dentist is scamming me. What age is it appropriate or legal to leave your kid at home? From the silly to the serious and even the controversial. Can I say something that will probably just get me canceled? I'm Jonquilyn Hill.
And I'm hosting a new podcast at Vox that'll be your go-to hotline for answers to the questions you don't know how to answer. Email a voice memo to askvox at vox.com or call 1-800-618-3545. I promise you it's better than asking ChatGPT.