
The mind-reading potential of AI | Chin-Teng Lin

2024/12/26

TED Talks Daily

People
Chin-Teng Lin
Topics
Chin-Teng Lin: Traditional communication methods such as keyboards and touchscreens are inefficient and unnatural, especially for non-native speakers. AI can resolve the bottleneck of transferring information between the brain and the computer, turning the speech in the mind into text on screen. The team's brain-computer interface is built on how the brain naturally works, enabling more natural interaction: AI decodes EEG signals to identify the biomarkers of speaking, ultimately turning the thoughts in the mind into text through a wearable device. The technology currently decodes the EEG signals of silent speech with an accuracy of about 50%. Sensors pick up the EEG signals, and deep learning together with large language models decodes the signals into text; for the user, the interaction is natural, driven by thought and natural language. The technology can also select items through visual attention alone, with no physical movement. Technical challenges remain, such as interference and individual differences, and accuracy still needs to improve. The technology raises privacy and ethical issues that must be handled carefully. It can serve as a new mode of communication, especially for people with speech impairments or in settings that require confidentiality. It uses natural language and the natural thought process, with no unnatural intervention in the body. It promises to turn thoughts into words on screen by thought alone. Charles: (No explicit viewpoint; mainly assists with the demonstration) Daniel: (No explicit viewpoint; mainly assists with the demonstration)

Deep Dive

Key Insights

What is the primary goal of Chin-Teng Lin's research on brain-computer interfaces?

The primary goal is to develop technology that translates neural signals into text on a computer, enabling communication through silent thoughts. This aims to overcome the bottleneck of efficiently transferring thoughts from the brain to a computer.

How does AI contribute to decoding brain signals into words?

AI decodes brain signals by identifying biomarkers of speaking using EEG headsets. Deep learning is used to translate these signals into intended words, and large language models correct mistakes in EEG decoding, making the process natural and efficient.
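The two-stage pipeline described here can be illustrated with a minimal, self-contained Python sketch. The decoder and the language-model scorer below are toy stand-ins invented for illustration, not the team's actual models, and the vocabulary is made up.

```python
import numpy as np

# Tiny vocabulary standing in for the word set the technology was trained on (invented).
VOCAB = ["car", "card", "cat", "the", "red"]

def toy_decoder(eeg_window: np.ndarray) -> list[tuple[str, float]]:
    """Stand-in for the deep-learning decoder: one EEG window -> scored word candidates."""
    rng = np.random.default_rng(abs(int(eeg_window.sum() * 1e6)))
    scores = rng.dirichlet(np.ones(len(VOCAB)))  # fake per-word confidence scores
    return sorted(zip(VOCAB, scores), key=lambda ws: -ws[1])

def toy_lm_correct(candidates: list[tuple[str, float]]) -> str:
    """Stand-in for the LLM step: weight decoder scores by contextual plausibility."""
    plausibility = {"car": 0.5, "card": 0.2, "cat": 0.2, "the": 0.05, "red": 0.05}
    return max(candidates, key=lambda ws: ws[1] * plausibility[ws[0]])[0]

# One window of fake EEG: 64 channels x 250 samples, i.e. one silently spoken word.
eeg_window = np.random.randn(64, 250)
print(toy_lm_correct(toy_decoder(eeg_window)))
```

The two-stage shape is the point: a noisy per-word decoder, followed by a language model that uses context to repair its mistakes.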

What is the current accuracy of decoding silent speech into words?

The technology achieves around 50% accuracy in decoding brain signals into words when someone is speaking silently. This represents significant progress but also highlights ongoing challenges in improving accuracy.

What are the potential applications of brain-computer interfaces?

Brain-computer interfaces can enable communication for individuals unable to speak, facilitate hands-free control of devices, and provide a natural way to interact with computers. They also have applications in scenarios requiring privacy or silence.

What are the ethical concerns associated with brain-computer interfaces?

Serious privacy and ethical issues arise, such as the potential for others to access one's thoughts without consent. Ensuring user control over the technology and addressing these concerns are critical for its responsible development.

How does the technology handle different neural signatures?

Different people have unique neural signatures, which affect decoding accuracy. The technology is designed to adapt to these variations, but challenges remain in overcoming interference and improving consistency across individuals.
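The talk does not detail how the system adapts, but a common approach in BCI research is a short calibration session per user, fitting a subject-specific decoder on that user's own trials. A generic sketch of the idea, with toy features and labels:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def calibrate_for_subject(features: np.ndarray, labels: np.ndarray):
    """Fit a decoder on one subject's calibration trials, capturing their neural signature."""
    model = LinearDiscriminantAnalysis()
    model.fit(features, labels)
    return model

# 40 fake calibration trials with 16 EEG features each, two classes of imagined words.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))
y = rng.integers(0, 2, size=40)

subject_model = calibrate_for_subject(X, y)
print(subject_model.predict(X[:5]))  # decode the first five trials with the calibrated model
```

A new user repeats the calibration step, so each decoder is tuned to one person's signatures rather than assumed to transfer across individuals.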

Transcript





You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day. I'm your host, Elise Hu.

How often are you frustrated by the time it takes to accurately get things in your mind into a computer? It is even worse for people like me, whose first language is not based on letters. I live and work in Australia, but I am originally from Taiwan.

I moved to Sydney eight years ago, and now run a university research center there. Most of us use a keyboard every day to get things in our mind into the computer. We have to learn to type. The fact that you have to learn to do something shows how unnatural it is.

The finger-driven touchscreen has been around for 60 years. It's convenient, but it's also slow. There are other ways to control computers, joysticks or gestures. They are not very useful in capturing the words in your mind. And it is words that are critical to communication for human beings. The problem is about to be over because of AI.

Today, I will show you how AI can turn the speech in your mind into words on screen. Getting from the brain to the computer efficiently is a real bottleneck for any computer application. It has been my passion for 25 years.

Many of you, or most of you, have heard of the brain-computer interface, BCI. I have been working on BCI for direct communication between the brain and machine since 2004. I developed a series of EEG headsets that do this, but they are not new.

What is new is an interface that works in a natural way, based on how our brain is working naturally. Imagine reading words when someone is thinking, translating the brain signals into words. Today, you will see this in action, and with no implant.

We are using AI to decode the brain signals on the top of your head and identify the biomarkers of speaking. That means that you can send the words in your mind into the computer with wearable technology. It's exciting. I believe it will open up the bottleneck of how we engage with computers.

We are making exciting progress in decoding EEG to text. It's natural. We have had very promising results in decoding EEG when someone is speaking aloud. The frontier we are working on now is to decode EEG when the speech is not spoken aloud. The words flow in your mind

when you are listening to others, or when you are talking to yourself or thinking. We are well on the way to making it a reality. I am going to invite two of my team, Charles and Daniel, to show it to us. This is a world premiere for us. We are getting around 50 percent accuracy in decoding the brain signals into words when someone is speaking silently.

Here is how it will work. We have a collection of words that we have trained our technology with. They are combined into sentences. Charles will select one sentence,

and Daniel will read the sentence word by word silently and produce the brain signals that will be picked up by our sensors. Our technology will decode the brain signals into words. We pick up the brain signals with sensors and amplify and filter them to reduce the noise and get the right biomarkers. We use AI for these tasks.
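The amplify-and-filter step can be sketched with a standard band-pass filter. The channel count, sampling rate, and 1-40 Hz band below are illustrative assumptions, not the team's actual specifications.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (assumed)

def preprocess(eeg: np.ndarray, low: float = 1.0, high: float = 40.0) -> np.ndarray:
    """Band-pass filter each EEG channel (rows) to suppress drift and high-frequency noise."""
    b, a = butter(N=4, Wn=[low, high], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg, axis=1)  # zero-phase filtering along time
    return filtered / np.std(filtered)      # crude gain normalization

raw = np.random.randn(8, 2 * FS)  # 8 channels, 2 seconds of fake EEG
clean = preprocess(raw)
```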

We use deep learning to decode the brain signals into the intended words. And then we use a large language model to match the decoded words and make up for the mistakes in the EEG decoding. All of this is going on in the AI, but for the user, the interaction is natural, through thoughts and in natural language.
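Because the decoded words come from a known, trained vocabulary combined into sentences, one simplified stand-in for the language-model matching step is to snap a noisy decoding onto the closest trained sentence. The sentences here are invented examples; an actual LLM would score candidates with far richer context.

```python
import difflib

# Invented examples of sentences built from the trained vocabulary.
TRAINED_SENTENCES = [
    "the red car stops here",
    "the black cat sleeps now",
    "my phone is on the desk",
]

def correct_decoding(decoded_words: list[str]) -> str:
    """Match a noisy EEG decoding to the most similar trained sentence."""
    decoded = " ".join(decoded_words)
    return difflib.get_close_matches(decoded, TRAINED_SENTENCES, n=1, cutoff=0.0)[0]

print(correct_decoding(["the", "rad", "cat", "stops", "hear"]))
# expected: "the red car stops here"
```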

We are very excited about the advances that we are making in understanding words and sentences. Another thing that is very natural to people is looking at something that has their attention. Imagine if you could select an item just by looking at it, not by picking it off the shelf or punching a code into the vending machine.

Two years ago, in a project about hands-free control of robots, we were very excited about robot control via visual identification of flickers. We are now beyond that. We don't need any flicker. The AI is making it natural. Daniel is going to look at the photos and select an item in his mind.
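The flicker approach mentioned above is known in the BCI literature as SSVEP (steady-state visual evoked potentials): each on-screen item flickers at its own frequency, and the item whose frequency dominates the EEG spectrum is the one being looked at. A minimal sketch with illustrative frequencies and synthetic data, not the team's newer flicker-free method:

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)
ITEM_FREQS = {"car": 8.0, "phone": 10.0, "cup": 12.0}  # one flicker frequency per item

def select_by_gaze(eeg_channel: np.ndarray) -> str:
    """Return the item whose flicker frequency carries the most EEG power."""
    spectrum = np.abs(np.fft.rfft(eeg_channel))
    freqs = np.fft.rfftfreq(len(eeg_channel), d=1.0 / FS)
    power = {item: spectrum[np.argmin(np.abs(freqs - f))]
             for item, f in ITEM_FREQS.items()}
    return max(power, key=power.get)

# Four seconds of fake EEG dominated by a 10 Hz response, i.e. looking at "phone".
t = np.arange(4 * FS) / FS
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(len(t))
print(select_by_gaze(eeg))  # expected: "phone"
```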

If it is working as it should, you will see the selected item pop up on screen. We use photos for this because they are very controllable. To show that this is not all just built into my presentation, Charles will pick one item for Daniel to select in his mind. Please, Charles. It's a car. So Daniel will select

the car in his mind. "Temper." That is incorrect. It's unlucky that the 30 percent error rate caught up with us again. Let's invite Charles and Daniel to show it again. When Daniel selects an item in his mind, his brain recognizes and identifies the object and triggers his EEGs. Our technology decodes the triggers.

We are working our way through the technical challenges. We will work on overcoming the interference issue. That's why I asked for the phones to be turned off. Different people have different neural signatures, which are important to decoding accuracy. One reason I brought Daniel along here is because he can give off great neural signatures. (Laughter)

He can give us great neural signatures as far as our technology is concerned. There are still cables here as well. It is not yet very portable. Probably the biggest barrier to people using this will be: how do I turn it off?

Every one of you will have had times when you are happy that the people you are with don't know what you are really thinking. There are serious privacy and ethics issues that will have to be dealt with. I am very passionate about how important this technology can be. One exciting point is linking the brain-computer interface to wearable computers. If you already have a computer on your head,

the brain will be a natural interface. It is not only about controlling a computer. The natural BCI also provides another way for people to communicate with people. For example, it allows people who are not able to speak to communicate with others, or to communicate when privacy or silence is required.

If your idea of nature is a lovely forest, you could wonder how natural this could be. My answer is: it's natural language. It's the natural thought process that you are using. There are no unnatural implants in your body. I'm challenging you to think about what you regard as natural communication:

turning the speech in your mind into words. There is a standard way to finish up when talking with people. You say, just think about it.

I hope you are as excited as we are about the prospect of a future in which, when you just think about something, the words in your mind appear on screen. Thank you. That was Chin-Teng Lin at TEDAI Vienna in 2024. If you're curious about TED's curation, find out more at ted.com slash curation guidelines.

And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Autumn Thompson, and Alejandra Salazar. It was mixed by Christopher Fazi-Bogan. Additional support from Emma Taubner and Daniela Balarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
