
Is AI progress stuck? | Jennifer Golbeck

2024/11/23

TED Talks Daily

People
Jennifer Golbeck
Topics
Jennifer Golbeck: Current worries about artificial intelligence, especially warnings that AI will surpass human intelligence and destroy civilization, are largely hype, and they distract attention from AI's actual problems, such as deepfakes and racially biased AI used in the justice system. The tech industry's warnings about AI's dangers may be aimed at attracting investment and generating buzz, and the fear of AI overtaking humanity is an effective distraction that lets people overlook the harm AI is already causing. Current AI technology may have already reached the peak of its capabilities; future development will be incremental improvement rather than exponential growth. The biggest challenge for today's AI is reliability: the algorithms are frequently wrong. The "hallucination" problem, in which AI routinely makes up facts, is a subset of the reliability problem and is hard to fix; because of how generative AI works, hallucination may never be fully eliminated. Even if reliability were solved, the error rate could still be too high for the tools to be genuinely useful. Problems with training data, and the negative feedback loop of AI-generated low-quality content, further limit AI's progress. The enormous investment in AI has produced very low returns, suggesting its commercial value may be overestimated. Even if the reliability and data problems were solved, the existence of cheap open-source versions means companies are unlikely to use AI to replace workers at scale. AI technology currently cannot solve the bias problem, and this deserves attention: decision-making systems that rely on AI should not be widely adopted before the bias problem is solved. The core of human intelligence lies in human relationships, emotion, and creativity, which AI cannot replicate. There is no need to worry excessively about AI taking over civilization, because in the worst case, humans can always just turn it off.

Deep Dive

Key Insights

Why are some tech industry leaders warning about the dangers of AI?

They aim to attract investors by emphasizing the potential existential threat, which can justify significant funding. Additionally, the idea of AI overtaking humanity is a cinematic concept that captures public interest, serving as a distraction from current AI-related issues.

What are the main challenges in achieving reliable AI?

The primary challenges include AI hallucination, where AI often makes up information, and the general unreliability of current AI models. Google, for instance, has admitted they don't know how to fix the problem of incorrect search results.

How does AI hallucination impact the reliability of AI tools?

AI hallucination refers to AI making up information outright, which is a significant issue. For example, ChatGPT has fabricated quotes threatening violence and cited legal cases that don't exist, undermining its reliability in critical applications.

Why might AI not lead to widespread job loss?

AI could increase productivity without necessarily leading to job loss. For instance, if AI makes software engineers twice as efficient, companies might choose to keep both engineers and increase profits rather than lay off one.

What are the potential issues with AI and human biases?

AI trained on human data adopts human biases, which are difficult to eliminate. Attempts to put guardrails in place to prevent biased outcomes can sometimes create new problems, as seen with Google's AI image generator.

What distinguishes human intelligence from AI capabilities?

Human intelligence is defined by emotional responses, creative integration of past and new information, and genuine connections with others. AI, while capable of imitation, lacks these core human attributes.

What is the financial outlook for AI development?

Despite $50 billion invested in AI over the past few years, resulting revenue is only $3 billion. This suggests that current investment levels may not be sustainable, especially considering the high costs of hardware needed for AI improvements.

Chapters
Jennifer Golbeck discusses the hype and reality of AI, questioning whether progress in AI is accelerating or plateauing.
  • AI can outperform humans on specific tasks like chess.
  • There's concern about Artificial General Intelligence (AGI) surpassing human capabilities.
  • Tech industry leaders warn about AI's potential existential threat to humanity.

Transcript


You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day. I'm your host, Elise Hu. You've heard so much about advances in AI in the past year, and a lot of it right here on this show.

It's vital to talk about, especially when we hear warnings about how AI could overtake human intelligence and destroy civilization. In her 2024 talk, computer scientist and AI researcher Jennifer Golbeck asks us to take a step back first. She cuts through the hype to clarify what is worth worrying about and what isn't when it comes to AI. It's coming up after the break.

Support for this show comes from Capital One. Banking with Capital One helps you keep more money in your wallet with no fees or minimums on checking accounts and no overdraft fees. Just ask the Capital One bank guy. It's pretty much all he talks about in a good way. He'd also tell you that this podcast is his favorite podcast, too. Oh, really? Thanks, Capital One bank guy. What's in your wallet? Terms apply. See CapitalOne.com slash bank. Capital One N.A. Member FDIC.

Does your AI model really know code? Its specific syntax, its structure, its logic? IBM's Granite code models do. They're purpose-built for code and trained on 116 different programming languages to help you generate, translate, and explain code quickly. Because the more your AI model knows about code, the more it can help you do. Get started now at ibm.com slash granite. IBM, let's create.

Support comes from Zuckerman Spaeder. Through nearly five decades of taking on high-stakes legal matters, Zuckerman Spaeder is recognized nationally as a premier litigation and investigations firm. Their lawyers routinely represent individuals, organizations, and law firms in business disputes, government and internal investigations, and at trial, when the lawyer you choose matters most. Online at Zuckerman.com.

Like TED Talks? You should check out the TED Radio Hour with NPR. Stay tuned after this talk to hear a sneak peek of this week's episode. And now, our TED Talk of the day. We've built artificial intelligence already that on specific tasks performs better than humans. There's AI that can play chess and beat human grandmasters.

But since the introduction of generative AI to the general public a couple years ago, there's been more talk about artificial general intelligence, or AGI. And that describes the idea that there's AI that can perform at or above human levels on a wide variety of tasks, just like we humans are able to do.

And people who think about AGI are worried about what it means if we reach that level of performance in the technology. Right now, there's people from the tech industry coming out and saying, the AI that we're building is so powerful and dangerous that it poses a threat to civilization. And they're going to government and saying, maybe you need to regulate us.

Now, normally when an industry makes a powerful new tool, they don't say it poses an existential threat to humanity and that it needs to be limited. So why are we hearing that language?

And I think there's two main reasons. One is, if your technology is so powerful that it can destroy civilization, between now and then, there's an awful lot of money to be made with that. And what better way to convince your investors to put some money with you than to warn that your tool is that dangerous?

The other is that the idea of AI overtaking humanity is truly a cinematic concept. We've all seen those movies. And it's kind of entertaining to think about what that would mean now with tools that we're actually able to put our hands on. In fact, it's so entertaining that it's a very effective distraction from the real problems already happening in the world because of AI.

The more we think about these improbable futures, the less time we spend thinking about how do we correct deep fakes or the fact that there's AI right now being used to decide whether or not people are let out of prison and we know it's racially biased.

But are we anywhere close to actually achieving AGI? Some people think so. Elon Musk said that we'll achieve it within a year. But like at the same time, Google put out their AI search tool that's supposed to give you the answer so you don't have to click on a link. And it's not going super well. Now, of course, these tools are going to get better.

But if we're going to achieve AGI or if they're even going to fundamentally change the way we work, we need to be in a place where they are continuing on a sharp upward trajectory in terms of their abilities. And that may be one path, but there's also the possibility that what we're seeing is that these tools have basically achieved what they're capable of doing. And the future is incremental improvements in a plateau.

So to understand the AI future, we need to look at all the hype around it and get under there and see what's technically possible. And we also need to think about where are the areas that we need to worry and where are the areas that we don't.

So if we want to realize the hype around AI, the one main challenge that we have to solve is reliability. These algorithms are wrong all the time. And Google actually came out and said after these bad search results were popularized that they don't know how to fix this problem.

I use ChatGPT every day. I write a newsletter that summarizes discussions on far-right message boards, and so I download that data. ChatGPT helps me write a summary. And it makes me much more efficient than if I had to do it by hand, but I have to correct it every day because it misunderstands something. It takes out the context. And so because of that, I can't just rely on it to do the job for me. And this reliability is really important.

Now, a subpart of reliability in this space is AI hallucination, a great technical term for the fact that AI just makes stuff up a lot of the time.

I did this in my newsletter. I said, "ChatGPT, are there any people threatening violence? If so, give me the quotes." And it produced these three really clear threats of violence that didn't sound anything like people talk on these message boards. And I went back to the data and nobody ever said it. It just made it up out of thin air. And you may have seen this if you've used an AI image generator. We have to solve this hallucination problem if this AI is going to live up to the hype.

And I don't think it's a solvable problem with the way this technology works. There are people who say we're going to have it taken care of in a few months, but there's no technical reason to think that's the case because generative AI always makes stuff up. When you ask it a question, it's creating that answer or creating that image from scratch when you ask. It's not like a search engine that goes and finds the right answer on a page. And so because its job is to make things up every time,

I don't know that we're going to be able to get it to make up correct stuff and then not make up other stuff. That's not what it's trained to do, and we're very far from achieving that. In fact, there are spaces where they're trying really hard. One space where there's a lot of enthusiasm for AI is the legal area, where they hope it will help write legal briefs or do research.

Some people have found out the hard way that they should not write legal briefs right now with ChatGPT and send them to federal court because it just makes up cases that sound right. And that's a really fast way to get a judge mad at you and to get your case thrown out. Now, there are legal research companies right now that advertise hallucination-free generative AI.

And I was really dubious about this. And researchers at Stanford actually went in and checked it. And they found the best-performing of these hallucination-free tools still hallucinates 17% of the time.

So like on one hand, it's a great scientific achievement that we have built a tool that we can post basically any query to. And 60 or 70 or maybe even 80% of the time, it gives us a reasonable answer. But if we're going to rely on using those tools and they're wrong 20 or 30% of the time, there's no model where that's really useful.

And that kind of leads us into how do we make these tools that useful? Because even if you don't believe me and you think we're going to solve this hallucination problem, we're going to solve the reliability problem, the tools still need to get better than they are now. And there's two things they need to do that. One is lots more data. And two is the technology itself has to improve. And now back to the episode.

So where are we going to get that data? Because they've kind of taken all the reliable stuff online already. And if we were to find twice as much data as they've already had, that doesn't mean they're going to be twice as smart.

I don't know if there's enough data out there, and it's compounded by the fact that one way generative AI has been very successful is in producing low-quality content online. That's bots on social media, misinformation, and these SEO pages that don't really say anything but have a lot of ads and come up high in the search results.

And if the AI starts training on pages that it generated, we know from decades of AI research that they just get progressively worse. It's like the digital version of mad cow disease. Let's say we solve the data problem. You still have to get the technology better. And we've seen $50 billion in the last couple years invested in improving generative AI. And that's resulted in $3 billion in revenue.

So that's not sustainable. But of course, it's early, right? Companies may find ways to start using this technology, but is it going to be valuable enough to justify the tens and maybe hundreds of billions of dollars of hardware that needs to be bought to make these models get better?

I don't think so. And we can kind of start looking at practical examples to figure that out. And it leads us to think about where are the spaces we need to worry and not. Because one place that everybody's worried with this is that AI is going to take all of our jobs. Lots of people are telling us that's going to happen and people are worried about it. And I think there's a fundamental misunderstanding at the heart of that. So imagine this scenario. We have a company and they can afford to employ two software engineers.

And if we were to give those software engineers some generative AI to help write code, which is something it's pretty good at, let's say they're twice as efficient. That's a big overestimate, but it makes the math easy. So in that case, the company has two choices. They could fire one of those software engineers because the other one can do the work of two people now, or they already could afford two of them.

And now they're twice as efficient, so they're bringing in more money. So why not keep both of them and take that extra profit? The only way this math fails is if the AI is so expensive that it's not worth it.

But that would be like the AI is a hundred thousand dollars a year to do one person's worth of work. So that sounds really expensive. And practically, there are already open source versions of these tools that are low cost that companies can install and run themselves. Now, they don't perform as well as the flagship models, but if they're half as good and really cheap, wouldn't you take those over the one that costs $100,000 a year to do one person's work? Of course you would.

And so even if we solve reliability, we solve the data problem, we make the models better, the fact that there are cheap versions of this available suggests that companies aren't going to be spending hundreds of millions of dollars to replace their workforce with AI. There are areas that we need to worry, though, because if we look at AI now, there are lots of problems that we haven't been able to solve.

I've been building artificial intelligence for over 20 years, and one thing we know is that if we train AI on human data, the AI adopts human biases, and we have not been able to fix that. We've seen those biases start showing up in generative AI, and the gut reaction is always, well, let's just put in some guardrails to stop the AI from doing the biased thing, but

But, one, that never fixes the bias, because the AI finds a way around it, and two, the guardrails themselves can cause problems. So Google has an AI image generator, and they tried to put guardrails in place to stop the bias in the results, and it turned out that made the results wrong. And so in trying to stop the bias, we end up creating more reliability problems.

We haven't been able to solve this problem of bias. And if we're thinking about deferring decision-making, replacing human decision-makers and relying on this technology, and we can't solve this problem, that's the thing that we should worry about and demand solutions to before it's just widely adopted and employed because it's sexy.

And I think there's one final thing that's missing here, which is our human intelligence is not defined by our productivity at work. At its core, it's defined by our ability to connect with other people, our ability to have emotional responses, to take our past and integrate it with new information and creatively come up with new things. And that's something that artificial intelligence is not now, nor will it ever be capable of doing.

It may be able to imitate it and give us a cheap facsimile of genuine connection and empathy and creativity, but it can't do those core things to our humanity. And that's why I'm not really worried about AGI taking over civilization. But if you come away from this disbelieving everything I have told you, and right now you're worried about humanity being destroyed by AI overlords, the one thing to remember is,

despite what the movies have told you, if it gets really bad, we still can always just turn it off. Thank you.

This episode is brought to you by Progressive Insurance. You chose to hit play on this podcast today. Smart choice. Make another smart choice with AutoQuote Explorer to compare rates from multiple car insurance companies all at once. Try it at Progressive.com. Progressive Casualty Insurance Company and affiliates not available in all states or situations. Prices vary based on how you buy.

Something about the way we're working just isn't working. When you're caught up in complex HR requirements, or distracted by scheduling staff in multiple time zones,

or thinking about the complexity of working in Monterey while based in Montreal. You're not doing the work you're meant to do, but with Dayforce, you get HR, pay, time, talent, and analytics all in one global people platform. So you can do the work you're meant to do. Visit dayforce.com slash do the work to learn more.

That was Jennifer Golbeck at TEDx Mid-Atlantic in 2024. If you're curious about TED's curation, find out more at TED.com slash curation guidelines.

And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Autumn Thompson, and Alejandra Salazar. It was mixed by Christopher Fazi-Bogan. Additional support from Emma Taubner and Daniela Balarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.

Black Friday savings are going on now at National Tire and Battery. Save $300 on sets of four Goodyear tires and brake service for $178.99. NTB, depend on us to keep you driving. On the TED Radio Hour, in the middle school cafeteria, Ty Tashiro always sat with his equally nerdy buddies. The socially awkward kids were the furthest thing from cool. And he often wondered... Why am I so socially awkward and what am I going to do about that?

Now Ty is a psychologist and expert on awkwardness. And he has some answers. So awkward. That's next time on the TED Radio Hour from NPR. Subscribe or listen to the TED Radio Hour wherever you get your podcasts.