On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Rauw Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.
Since World War II, countries like Cuba have used shortwave radio to communicate with their spies. If you have the code, those numbers obviously are all words, and they were instructions to Ana, and that told her where to look and what to do that particular week. This week on Criminal, the story of a woman who spied on the U.S. government for 17 years and how she was caught.
Listen to our latest episode, Ana, wherever you get your podcasts. For a lot of people, figuring out what you're meant to do with your life is a long, winding process.
But for some lucky ones, a career path becomes clear in an instant. Well, I've always been very interested in music. I spent all my time playing the piano and composing and so on. For Diana Deutsch, that moment happened back in the 50s. But it didn't go exactly how she imagined it.
My music teacher performed on the BBC Third Programme in the mornings. She was playing piano in a trio. And I was asked to be a page turner. Essentially, Diana would be turning the pages of the sheet music so her teacher wouldn't have to stop playing. So I went up to BBC House and...
Diana had always dreamed of being a musician, so even just turning pages on the BBC felt like the big time.
Unfortunately, my hand jerked and all the pages flew down onto the floor. The poor lady had to, while playing the piano with one hand, pick up the pieces with the other one. It was a terrible experience.
Diana came face to face with her dream, and she knew with complete clarity that it wasn't for her. It certainly made me realize that being a performing musician was probably not a good idea for me.
Instead of aiming for a career as a performer, Diana got into researching the psychology of music, particularly how different people perceive sounds. And she was one of the first people to study this by generating synthesized tones using enormous mainframe computers.
One day in 1973, she was experimenting with playing two sequences at the same time. And I had no idea what would happen, but I thought it would be interesting to try. You can actually hear exactly what Diana heard back then, but only if you're listening on headphones. So if you have a pair around, now would be a good time to put them in. I started off with a high tone alternating with a low tone in one ear, and at the same time, a low tone alternating with a high tone in the other ear.
High-low on one side, low-high on the other. And what I heard seemed incredible. I heard a single high tone in my right ear that alternated with a single low tone in the left ear. Both ears were getting high-low sequences, but she wasn't hearing them in both ears. She only heard high tones on the right and low tones on the left. Just as a kind of knee-jerk reaction, I switched the headphones around
And it made no difference to what I perceived. The high tones remained in my right ear and the low tones remained in my left ear. If you have headphones on, flip them around. There's probably no difference. I went out into the corridor and pulled in as many people as I could. And by the end of that afternoon, I must have tested, oh, I don't remember how many, but probably dozens of people.
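If you want to hear (or tinker with) this kind of dichotic sequence yourself, here's a minimal Python/NumPy sketch that synthesizes one. The 400 Hz and 800 Hz tones match the octave-apart frequencies Deutsch describes; the 250 ms tone length and everything else here is an illustrative assumption, not her exact stimulus.

```python
import wave
import numpy as np

SR = 44100       # sample rate, Hz
TONE_MS = 250    # duration of each tone (an assumption, for illustration)
LOW, HIGH = 400.0, 800.0  # two sine tones an octave apart

def tone(freq_hz, ms=TONE_MS, sr=SR):
    """One sine tone with short 5 ms fades to avoid clicks."""
    t = np.arange(int(sr * ms / 1000)) / sr
    x = np.sin(2 * np.pi * freq_hz * t)
    fade = np.linspace(0.0, 1.0, int(sr * 0.005))
    x[:fade.size] *= fade
    x[-fade.size:] *= fade[::-1]
    return x

def octave_illusion(n_pairs=10):
    """Left ear gets high-low-high-low...; right ear gets low-high-low-high...
    Returns a stereo float array of shape (samples, 2)."""
    left = np.concatenate([tone(HIGH if i % 2 == 0 else LOW)
                           for i in range(n_pairs * 2)])
    right = np.concatenate([tone(LOW if i % 2 == 0 else HIGH)
                            for i in range(n_pairs * 2)])
    return np.stack([left, right], axis=1)

stereo = octave_illusion()

# Write a stereo WAV you can listen to on headphones
pcm = (stereo * 0.5 * 32767).astype(np.int16)
with wave.open("octave_illusion.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```

On headphones, most listeners hear all the high tones on one side and all the low tones on the other, even though both channels carry both pitches.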
And most of them heard exactly what I heard. Diana literally couldn't believe it. I was beside myself. It seemed to me that, you know, I'd entered another universe or I'd gone crazy or something. It just seemed that the world had just turned upside down. I'm Noam Hassenfeld, and this is Making Sense, a new series from Unexplainable about the weird, perplexing, enormous unknowns of our senses.
We're starting by trying to make some sense of sound. What are we actually hearing when we're hearing? How much of it is the real world? And how much is constructed in our brain? All knowledge must come through the senses. All that we perceive and all of the awareness of our daily existence. Light. Double rainbow. Oh my god. Sound. Listen to me. Listen to me. Touch. Squeezy. Odors. Ew. And tastes. Mmm. Mmm.
What are your thoughts concerning the human senses? As meat and wine are nourishment to the body, the senses provide nutriment to the soul. All that we perceive, see, all the awareness, hearing, all knowledge must come through the senses. I have an incredible sense of touch. All that we perceive, tasting, all the awareness, smelling, all knowledge must come through the senses. Doesn't make sense. So now we've all
Before we get to all the unknowns, let's start with what we do know about sound. Sound is rapid changes in air pressure that happen when something is vibrating. Matthew Winn, audiologist, University of Minnesota. So you can think of it in the same way that you think of a wave in a pond. None of the water particles move very far. They just sort of bob up and down, but they set a whole wave into motion. And it's like a domino effect moving through space.
This pressure wave travels through the air. And then, you know, a whole chain of events will set into motion in your ear.
The wave passes through the ear canal. The eardrum vibrates back and forth. And a few little bones amplify that vibration, sending it deeper toward the cochlea, the spiral-shaped organ in the inner ear that's covered with thousands of hair cells. The cochlea is where the sensory cells are that pick up the sound and turn it into something the brain can use. Pressure waves become electrical impulses, which are eventually interpreted as sound.
So this sounds like a long, complicated process, but it's extremely fast. I mean, there's no sense that's faster than hearing. Your ear can do this whole process thousands of times per second. All of that — the pressure waves, the ear vibrations, the transformation to electrical impulses — that's the simple part. The part we know. The complicated part is pretty much going to take up the rest of this episode.
Because there's a difference between the pressure waves that enter our ears and what we actually end up hearing. If we actually perceived every different sound that came in, we would be utterly confused. Take Matthew's voice, for example. Even in the room that I'm in right now, I'm just in a room in my house, there are echoes all around me. Because anytime you have a flat surface on a table, a wall, a computer screen, anything...
The sound will, in fact, reflect off of it. All of these echoes bouncing around should theoretically make sounds really hard to locate in space. And so if we hear that and then hear another echo coming from the wall on my right, and then I hear an echo coming off the ceiling and then my table, how would I know which direction the sound is coming from? It's coming from all directions. But our brain has an answer. Thankfully, our brain knows.
Sounds only come from one direction, and that's the only way the world makes sense. In order to function in the real world, our brain makes a guess. It perceives that first wave of sound coming in, and then every subsequent reflection of that sound, it's like saying, okay, I can suppress you, which is why a lot of people aren't even aware that there are echoes, because our brain is so good at suppressing them.
Our brain essentially edits our auditory experience. The way I like to phrase it is that the brain is being nudged in a direction rather than just straight out reading the world. Which is exactly what Diana stumbled across that day in the 70s when she was flipping her headphones back and forth. It just seemed that the world had just turned upside down.
These days, auditory illusions aren't as unheard of as they used to be. But Diana's a big reason why. She's now a psychology professor at UC San Diego, and she's been using computer-generated sounds to study the brain's editor for decades. With that first illusion she discovered, Diana thinks two parts of your brain are disagreeing: the parts that determine pitch and location.
That's why you hear a high tone on one side and a low tone on the other, even though they're really on both sides. And after finding that first illusion, Diana couldn't stop thinking about it. Of course I didn't sleep much that night.
This can't be the only illusion that does this kind of thing. Diana started wondering whether she could design other illusions to learn more about the brain's internal machinery. In the same way as, you know, if a piece of equipment, such as a car, breaks down, you can find out a lot about the way the car works just by fixing what went wrong. So she started brainstorming. I was sort of half asleep and I was imagining notes jumping around in space and...
By the next morning, they had sort of crystallized into what I named the scale illusion. The scale illusion. Just like before, this illusion consists of two tone sequences, one in each ear. So there's one channel alone. Some high notes, some low notes. And then the other channel alone. Some more high notes, some more low notes. And then you hear them together again.
If you're listening on headphones, you're probably hearing all the high notes on one side and all the low notes on the other, even though those notes are actually jumping from left to right. That's your brain editing the sounds. It's separating them to reflect the way the world usually is. In the real world...
One would assume that sounds that are in a higher pitch range are coming from one source and sounds in a lower pitch range are coming from another source. So that's what the brain assumes is happening here. The brain reorganizes the sounds in space in accordance with this interpretation.
Just like removing echoes, this kind of brain editing would normally help you make sense of the world. But Diana's illusion is explicitly designed to fool the brain into making a wrong guess. And not everyone's brain makes the same guess. "Left-handers as a group are likely to be hearing something different from right-handers as a group." Right-handers tend to hear high tones on the right side, but for left-handers, it's more complicated.
They're likelier than other people to hear high tones on the left or in even weirder ways.
All of this reorganization, the way the brain edits our hearing to help us navigate the real world, it's sometimes called top-down processing. Top-down processing occurs when the brain uses expectation, experience, and also various principles of perceptual organization to influence what is perceived.
Instead of bottom-up processing, which is sensing the world and then having that travel up to the brain, top-down processing means that our brain is influencing how we hear. To some extent, our brain is hearing what we are expecting to hear. In a sense, a lot of what we perceive isn't actually us hearing sound waves hit our eardrum. It's a prediction of what those waves should be.
To illustrate this, Diana uses something called the mysterious melody. This is a well-known tune.
but the notes are presented in different octaves. For all the non-music folks out there, an octave is basically a standard range of musical notes. In this illusion, the notes stay the same, but which range they're played in changes. So instead of playing Do, Re, Mi in the same range with all the notes next to each other, you could play Do, Re, Mi with the notes jumping into a different range.
So Diana takes a well-known tune, doesn't change the melody, just changes the range. And the question is, can people recognize this melody? And in fact, people can't recognize the melody. Now listen to a simplified version of the same sequence. In this case, all the notes are in the same octave. Same range.
You know what it is. Yeah, indeed, it's Yankee Doodle. And a lot of times when people go back and listen to the scrambled version, they can hear Yankee Doodle in there. When you have a frame of reference for what you're hearing, when you have an expectation, it actually changes what you're hearing. Illusions like this tend to circulate around the internet every once in a while. Like this one where depending on which word you're thinking of, you might be able to hear either Laurel or Yanny. Laurel. Laurel.
Remember last year when everybody was going nuts over that Laurel versus Yanny thing? Well, there's a kiddie version of it making the rounds right now. This is from Jimmy Kimmel's show, and he starts by pulling up a clip from Sesame Street, of all places. I'm moved!
- Move it to follow you. - Move the camera? Yes, yes, that sounds like an excellent idea. All right. - And pay attention to this, 'cause tell me if you hear Grover say one of two things, that sounds like an excellent idea or that's a effing excellent idea. Are you ready? - Move the camera. - Yes, yes, that sounds like an excellent idea.
-Guillermo, what did you hear? -It's a ----ing excellent idea. -You heard that? -Yes, I did, yeah. -It's the first time I heard it. I didn't hear a curse word at all. And then the next 12 times I watched it, the F-word was all I heard. But... -Just in case you want one more go at it, here's Grover maybe making a lot of parents upset. -Yes! Yes, that sounds like an excellent idea! -This type of misperception is true to an extent with all our senses. We've all seen visual illusions, or you might remember the debate around the dress.
But Diana eventually found that the various ways our brain edits the world, they're not just due to hard-coded differences, like whether you're right or left-handed. Brain editing can vary from person to person based on life experience. To prove this, she asked listeners to determine whether a pattern is going up or going down.
For people who know a bit of music theory, this interval is a tritone, which is exactly half of an octave. So to get from note to note, you travel the same distance whether you're going up or down. If you don't know that much about music, all you need to know is that this is a particularly ambiguous pattern. But Diana does something really interesting in her experiment here.
She plays the melody in a bunch of registers at the same time. So you might have an extra hard time figuring out if it's rising or falling. And sure enough, you get huge differences from one individual to the other. And this is something that really does surprise people.
I hear it going up, and Diana found that other people hear it going up, but some people hear it going down. What's truly mind-boggling is that Diana's found that the difference in how two people perceive this pattern, it might come down to where you grew up. Believe it or not, when Diana compared two groups, people from southern England and people from California, she found that the English people tended to hear this pattern as rising, whereas the Californians heard that same pattern as falling.
Diana's hypothesis is that based on where you grow up, you tend to hear different pitches as low or high. It has to do with the pitch range of the speech to which you have been most frequently exposed, particularly in childhood. So if you hear that first pattern, which goes from the notes D to G sharp as falling, you probably hear this second pattern, which goes the exact same distance from the notes A to D sharp as rising, or vice versa.
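The "exact same distance" claim is just arithmetic on frequency ratios: a tritone is six semitones, so in equal temperament both patterns span a factor of 2^(6/12) = √2, which is why neither direction is acoustically privileged. A quick check, using standard A440 tuning (the specific octave chosen for each note below is an illustrative assumption):

```python
import math

A4 = 440.0  # standard tuning reference, Hz

def note_freq(semitones_from_a4):
    """Equal-tempered frequency relative to A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

d4, g_sharp4 = note_freq(-7), note_freq(-1)   # the D -> G# pattern
a4, d_sharp5 = note_freq(0), note_freq(6)     # the A -> D# pattern

# Both pairs are separated by exactly half an octave (a factor of sqrt 2),
# so "up" and "down" cover the same distance either way.
assert math.isclose(g_sharp4 / d4, math.sqrt(2))
assert math.isclose(d_sharp5 / a4, math.sqrt(2))
print(round(d4, 2), round(g_sharp4, 2))  # 293.66 415.3
```

Because the interval is perfectly ambiguous, whether a listener hears it as rising or falling depends entirely on top-down interpretation.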
But ultimately, the mechanics of all this are still pretty much a mystery. Scientists don't really know how all this brain editing happens. I mean, we know that the brain does that, but we don't really know how. In a sense, it's almost like we're all listening to a play performed in our heads just for us. There's a script, the entire world of pressure waves bouncing around. But how we actually hear it all is up to the performers.
In so many ways, our brain dictates how we hear the world. But even though we don't know exactly how our brain does this, there are times when harnessing that brain magic starts to become a lot more important. It was like my hearing was pouring out of my head like water out of a cracked jar. Coming up after the break, one man's quest to hear his favorite piece of music again. That's next. Support for Unexplainable comes from Greenlight.
People with kids tell me time moves a lot faster. Before you know it, your kid is all grown up, they've got their own credit card,
And they have no idea how to use it. But you can help. If you want your kids to get some financial literacy early on, you might want to try Greenlight. Greenlight is a debit card and money app that's made for families. Parents can send money to their kids. They can keep an eye on kids' spending and saving. And kids and teens can build money confidence and lifelong financial literacy skills.
Oda Sham is my colleague here at Vox, and she got a chance to try out Greenlight. There are videos you can watch on how to invest money. So we took a portion of his savings to put into investing where I told him, watch the videos so that he can start learning how to invest money as well.
Millions of parents and kids are learning about money on Greenlight. You can sign up for Greenlight today and get your first month free trial when you go to greenlight.com slash unexplainable. That's greenlight.com slash unexplainable to try Greenlight for free. greenlight.com slash unexplainable.
Hey, unexplainable listeners. Sue Bird here. And I'm Megan Rapinoe. Women's sports are reaching new heights these days, and there's so much to talk about and so much to explain. You mean, like, why do female athletes make less money on average than male athletes?
Great question. So, Sue and I are launching a podcast where we're going to deep dive into all things sports, and then some. We're calling it A Touch More. Because women's sports is everything. Pop culture, economics, politics, you name it. And there's no better folks than us to talk about what happens on the court or on the field.
and everywhere else too. And we're going to share a little bit about our lives together as well. Not just the cool stuff like Met Galas and All-Star Games, but our day-to-day lives as well. You say that like our day-to-day lives aren't glamorous. True. Whether it's breaking down the biggest games or discussing the latest headlines, we'll be bringing a touch more insight into the world of sports and beyond. Follow A Touch More wherever you get your podcasts. New episodes drop every Wednesday.
Unexplainable, we're back. And we've been talking about the mysterious way our brain filters, edits, and even reconstructs the world that we hear. For some people, this kind of brain magic can be interesting to highlight as a party trick. But for others, it can be way more important.
Okay, testing one, two, three, testing. This is Mike Chorost. So it's like you take the word chorus and just add a T at the end. Mike's a science writer who was born with severe hearing loss, but he was able to use hearing aids. And starting from when he was 15, he became obsessed with Bolero, the famous piece by Maurice Ravel. It was this riotous melange with such a fascinating drum beat underneath it all.
It really thrilled me and fascinated me. He particularly loved the way the melody would gradually evolve over the course of the piece. Each repetition is on a higher level. It's louder. The resonance is deeper, until it reaches the climax. So it's a very auditorily overwhelming piece of music. He would listen to Bolero over and over and over. It was kind of my piece of music.
that I would come to again and again and again to test out new hearing aids. So it's always been an auditory touchstone for me. — And then, one day in 2001, the limited hearing he still had started disappearing. — I was standing outside of a rented car
And I suddenly thought that my batteries had died. My hearing aid batteries. Suddenly, the traffic on a nearby highway started sounding different. It was just that sound that you associate with cars going by. But all of a sudden it sounded more like as if somebody had dumped a whole bunch of cotton onto the highway. Pretty soon, Mike found out he was quickly losing what was left of his hearing.
It was like my hearing was pouring out of my head like water out of a cracked jar. So after about four hours after that initial realization, I was essentially completely deaf. It was just such a shocking experience. But Mike was eligible to receive a cochlear implant. It's a surgically implanted device that can offer a form of hearing in some deaf people.
Many people in the deaf community prefer to communicate using sign language or lip reading rather than using a cochlear implant. But for some people, especially people who've lost their hearing later in life and want to continue using their native spoken language, cochlear implants can be helpful tools. The cochlea is this tiny spiral-shaped organ inside your head.
And a cochlear implant is a string of electrodes that's carefully inserted inside that spiral organ. This is Matthew again, the audiologist, who actually works with cochlear implant users to help them understand their experience. There's this external part that looks like a hearing aid but is not a hearing aid. It's a microphone and a computer that analyzes the sound and sends instructions to those electrodes that are inside the ear.
The implant essentially bypasses a lot of the ear. It directly activates the cochlea, which then passes an electric signal onto the brain.
But cochlear implants don't just reproduce normal hearing. Mike says that reducing sound to digital ones and zeros and beaming them directly into your brain, it can sound strange. It was shocking. It was not at all what I expected. When Mike's implant was turned on, the first thing he did was listen to his own voice. And my voice sounded really weirdly high-pitched. I almost sounded like... You know, it was that kind of sound. It was like a...
It was like listening to a demented mouse. Matthew actually gave me a program he uses as an audiologist to simulate various types of cochlear implant sounds. So here's a general idea of what it might have sounded like to Mike. It was very upsetting. I thought the world would sound pretty much like I heard with hearing aids, just fuzzier.
I was completely unprepared for the huge difference in pitches. Because of the way the implants are designed, they tend to make everything seem a bit high-pitched. So when you send a signal to any part of the cochlear implant, the brain will interpret that as a high-pitched sound, even if it's a low pitch. Which is why everything can sound all mousy. But the interesting thing is, within just a day or two, I started to hear low pitches again. And part of that was my brain adapting to it.
Essentially, Mike's brain was editing the world for him.
He was taking command of his own top-down processing. And then Mike started training. I got the audio books of the Winnie the Pooh books.
And I remember the first time I put the tape into the cassette player and played "Winnie-the-Pooh and Some Bees." I think that's the one. I couldn't make it out at all. It was just complete gibberish. But he also had the physical book, so he read along with the tape. So I was able to start matching up the weird input that I was getting
with the words on the page that told me what that input meant. What about a story? said Christopher Robin. Could you very sweetly tell Winnie the Pooh one? This is what the S sounds like. This is what the phoneme Pooh sounds like. Winnie the Pooh. So it is a process of remapping.
According to Matthew, this process of brain remapping is a pretty normal experience for cochlear implant users. Any good audiologist would say to someone if they're thinking about a cochlear implant that when you first get it and it first is activated, you probably won't understand much at all. But over the first six months, maybe the first year, your brain learns to reorganize how it associates sound with meaning.
Training's more accessible these days. It's certainly not as DIY as it was for Mike 20 years ago.
But this kind of improvement can still be hard to believe. A lot of the people that I've worked with will say, now when I listen to my spouse, it sounds like her voice, which baffles all of us who work in this field. Because if you look at how the ear is being activated, there's no explanation. I mean, not to be too on the nose, but it's unexplainable, right? So there's no way that that could possibly be true. And yet a lot of people say it.
Tweaking settings on the implant does make it work better, but that doesn't account for most of this incredible improvement. A lot of the success of the cochlear implant is really a testament to how strong the brain is working, rather than a reflection of the high quality of the sound input. Our brains have an almost uncanny ability to predict language and fill in gaps.
even when we hear something muffled or distorted. But while cochlear implants work pretty well for speech, they don't work nearly as well for music. Music is just a much more complicated kind of sound. You need to distinguish melodies and harmonies and textures and most fundamentally, pitches. And an implant only has a small number of electrodes. You have to simplify all the frequencies and
You can think of it as like pixelating the sound. Making this even harder, because the cochlea is filled with fluid, it's hard to use electrical pulses to stimulate the exact part that codes for the right frequency. Instead, the pulses kind of spread out around the part that codes for that frequency. Let me make an analogy.
Suppose you're playing a note on the piano. You can be really careful and hit the exact key you want, or you can be kind of crude and put your whole hand down on the piano. Like, you're going to be in the right ballpark of the note, but you're not going to hit the exact note very clearly. So a cochlear implant is more like putting your whole hand down on the note. It's not a very precise frequency you're hearing.
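That "pixelated sound" is roughly what audiologists' vocoder simulations reproduce: split the audio into a handful of frequency bands, discard the fine structure inside each band, and keep only the slowly varying loudness envelope, which then drives a crude carrier. Here's a rough sketch of an 8-channel noise vocoder; the channel count, band edges, and filter choices are illustrative assumptions, not any clinical processing strategy.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

SR = 16000  # sample rate, Hz

def noise_vocode(signal, n_channels=8, lo=100.0, hi=7000.0, sr=SR):
    """Crude cochlear-implant-style simulation: per band, replace the
    signal with noise shaped by that band's amplitude envelope."""
    # Log-spaced band edges, loosely mimicking the cochlea's frequency map
    edges = np.geomspace(lo, hi, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(signal.size)
    out = np.zeros_like(signal)
    for low_f, high_f in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low_f, high_f], btype="band", fs=sr, output="sos")
        band = sosfiltfilt(sos, signal)          # isolate one band
        envelope = np.abs(hilbert(band))         # its slow loudness contour
        out += sosfiltfilt(sos, noise) * envelope  # noise carrier, same band
    return out / (np.max(np.abs(out)) + 1e-12)   # normalize to [-1, 1]

# Example: vocode one second of a vowel-like tone complex
t = np.arange(SR) / SR
vowel = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
processed = noise_vocode(vowel)
```

Speech survives this mangling surprisingly well, because its information lives mostly in those envelopes; music, which depends on precise pitch, does not.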
When you take all of this into account, translating music with a cochlear implant can seem almost impossible. The current design of cochlear implants isn't set up really for music. It's set up to understand speech. But I wanted my Bolero back. Even though Mike's brain had learned how to edit those high-pitched tinny sounds to understand speech, music still wasn't the same. It just sounded awful.
I'm like, oh my God, you know. It was really shocking because even if it gets twice as good as this, it's still going to be awful. Even if it gets three times as good as this, it's still going to be awful. It was really bad. Mike upgraded the hardware of his cochlear implant. He upgraded the software. He even volunteered as a guinea pig for some tests on new equipment. So I would put on a set of headphones. I'd hear the set of beeps and boops.
I'm like, "Okay, which song is that?" And they're like, "I don't know." And it was like, "Could anybody know?" And for me, this was a very deeply frustrating kind of experiment because I know "Twinkle, Twinkle, Little Star." I was like, "That doesn't sound like 'Twinkle, Twinkle, Little Star' to me. How could this sound like 'Twinkle, Twinkle, Little Star' to anybody else?"
Researchers I spoke to told me that some cochlear implant users just don't enjoy music that much. It's certainly harder to get used to than speech. And because patients are often told to focus more on improving listening to speech, music can get left by the wayside. But appreciating music through an implant can sometimes be presented as an insurmountable obstacle.
You can see this in the movie Sound of Metal, where a musician gets a cochlear implant after losing his hearing, and then goes to this performance, listening to the song you're hearing right now. In this scene, the movie shows what other people at the performance hear, and then it gradually shifts perspectives to highlight what the main character hears through his cochlear implant. The performance is so upsetting for the main character that he ultimately takes his processor off.
he essentially decides not to use his implant anymore. You can find a lot of simulations online like this. So I asked Mike whether these kinds of simulations, or even ones like the simulations I created of a distorted voice or a distorted Bolero for this episode, seem like accurate representations of what music sounds like through an implant. I think you have to be extremely careful when listening to these simulations because basically what those simulations are telling you is
This is what the software is giving to the user. That's not the same thing as what the user hears. These are two very different things. When I listen to these simulations, and I have listened to them, it does sound a lot like what I heard on day one. It does not sound like what I hear in year 20.
For Mike, this was a combination of training himself with careful listening, but also tweaking the settings of the implant. Because with a lot of practice and effort and time, the experience of listening to music can improve. Yeah, I would listen to music over and over again. And I would try tweaking different settings. And I would go to my audiologist and I would say, these pitches sound really fuzzy to me. Can you do something about that?
And so she would tweak how much electricity went to different electrodes. And so this was an iterative process that went on and is still going on. After years of upgrades, tweaks, and training, Mike's noticed some real improvement, but not for all music. Most of the music that I enjoy is music that I heard with hearing aids. It's familiar to me.
Mike does listen to some new music, but preferring familiar music, it's a pattern that Matthew notices with his patients too. And I think it's a testament to the brain filling in those gaps, conjuring the memory of what the sound quality should be. The implant sort of gives you just enough that the brain can put together the whole puzzle. And of course, Mike is listening to Bolero again. Well, it sounds good. I really enjoy it. But there are things that I know that I'm missing.
I know that I'm still not getting some of that intensity and the purity where the music is reaching for a crescendo in each of its iterations. So I know I'm missing that. In a sense, Bolero is so familiar, it's almost like language for Mike. Bolero sounds really good to me because I know exactly what it's supposed to sound like. This new Bolero is certainly different from the version he remembers.
But Mike loves the new version. Even though the input I'm getting of Bolero is incomplete, and I can hear that it's incomplete, it is still a source of pleasure to me. Ultimately, we don't really know exactly how our brain is able to do this.
It can almost feel like magic, how it filters out echoes, how it shifts high tones to one ear and low tones to the other, how it can take a tinny, noisy input and rebuild a new version of Bolero. We do this very complex calculation, but I don't think that we really know exactly how it's done. Psychologist Diana Deutsch again. There are an awful lot of things about our hearing that we don't understand.
And what we hear is often quite different from what, in point of fact, is being presented. But we do know that the brain is constantly editing, shaping and building the world that we hear. Our brain, our life experience, our familiarity with a piece of music, it all shapes how we hear and what we hear, which raises a pretty fundamental question.
When an orchestra performs a symphony, what is the real music? Is it in the mind of the composer? Or is it in the mind of the conductor who has worked long hours to shape the orchestral performance? Is it in the mind of someone in the audience who's never heard it before and doesn't know what to expect? And the answer is surely that there's no one real version of the music, but many, and
And each one is shaped by the knowledge and expectations that listeners bring to their experiences. The idea that, to a very real extent, our brains conjure different individual realities inside our heads, on the one hand, it's a clear reminder to be humble. And not just for hearing. No matter how certain we are, what we perceive isn't unfiltered reality. So it's worth questioning ourselves at our most stubborn moments. At the same time, though,
How cool are brains? I know they're this perfect reminder of our own subjectivity and humility, but I also just can't get over the fact that our brain puts on this fireworks show every day. And that a lot of people using a cochlear implant can tap into this almost magic ability to translate a few electrodes into this new emotionally satisfying experience without scientists really knowing how the whole thing works.
There's so much we still don't understand about the brain and how it tries to make sense of the world. And it just makes me that much more excited for everything we're going to learn along the way.
This is just the first episode of our Making Sense series. Next week, touch and its evil twin, pain. Think of yourself if you have a toothache or if you have a problem. If someone holds your hand or someone pats your back or gives you a hug, that relieves, actually. Gentle human touch can be very good.
After next week, we'll be talking about more perplexing sense mysteries like how scientists still don't really know how smell works, how many tastes there could be, why some people can't see images in their heads, and even a sixth sense.
This episode was edited by Catherine Wells, Meredith Hodnot, and Brian Resnick. It was produced and scored by me, Noam Hassenfeld. Cristian Ayala handled the mixing and sound design with an ear from Afim Shapiro. Richard Sima checked the facts. Tori Dominguez is our audio fellow. Mandy Nguyen is keeping things sunny. And Bird Pinkerton is dreaming of bioluminescence.
If you want to check out more about Diana Deutsch and auditory illusions, we've got a link in our show description where you can find more illusions to listen to and a ton of info about the illusions she's discovered. To read more about some of the topics we cover on our show or to find episode transcripts, check out our site at vox.com slash unexplainable. And if you have thoughts about the show, you can always email us at unexplainable at vox.com. Or you could leave us a review or a rating, which we would also love.
Unexplainable is part of the Vox Media Podcast Network, and we'll be back in touch with episode two of our Sense series next week.