
Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI

2019/12/28

Lex Fridman Podcast

People
Lex Fridman
An American podcast host and research scientist, widely recognized in technology and science for his podcast and research work.
Melanie Mitchell
Topics
Lex Fridman: Expresses concern about the vagueness of the term "artificial intelligence," noting that it means different things to different people and that "intelligence" itself is poorly defined. He suggests alternative terms, such as "cognitive systems."

Melanie Mitchell: Shares the concern about the vagueness of the term "artificial intelligence," seeing both advantages and drawbacks in it. She argues that cognition is a complex, hard-to-define concept that encompasses many facets, including memory and perception. She also recalls early attempts in the history of AI to replace the term with alternatives such as "intelligent systems." In her view, as AI technology advances, our understanding of intelligence keeps shifting, and the line between strong and weak AI keeps moving: we may eventually build machines considered intelligent, but our understanding of intelligence will change along the way. Creating AI may require a substantial understanding of the human brain itself. She suggests that human fear of AI may stem from anxiety about our own status, and that different people worry about different human abilities being replaced. The drive to create AI may come both from a desire to better understand ourselves and from the practical usefulness of machines. Her own interest in AI stems from curiosity about human intelligence and about intelligence as a more general phenomenon. She views intelligence as possibly lying on a continuum of complex systems, with systems at many levels exhibiting some degree of intelligence; human intelligence is especially interesting because of its complexity, self-awareness, and capacity for reflection. Human predictions about the future of AI often fail, partly because we understand our own intelligence so poorly. Creating AI that reaches or exceeds human-level intelligence may take more than 100 years, and the path there may require a deeper understanding of the complex mechanisms of human-like cognitive systems, not just current machine learning methods.

Melanie Mitchell: Within the AI field there are many views on its current state and future direction, and these views are not mutually exclusive; for example, deep learning and cognitive science approaches can be combined. Copycat is a program that models the process of analogy-making, built to explore the central role of analogy in thought. Analogy is a core component of thinking: without analogies there are no concepts. Concepts are the basic units of thought, and analogy is the process of recognizing that different situations are essentially the same. Analogy underlies cognition because all generalization happens through analogy, and cognitive processes such as perception and reasoning depend on it. To model analogy-making in AI systems, we need internal models capable of mental simulation. Deep learning will play a role in perception, but its limitations lie in the lack of dynamic perception and conceptual understanding, and deep learning methods alone may not achieve AI because they lack certain innate learning capabilities. Achieving AI may require inventing new hardware and software data structures; Turing computation may be sufficient, but better algorithms and architectures may be needed. Cognitive architecture approaches such as Copycat have potential for the future development of AI. Perception involves generative models, prediction, and feedback mechanisms, and deep learning and cognitive architectures can be combined to improve the perception and reasoning abilities of AI systems. Full self-driving may require common-sense reasoning. Building human-level AI systems may require taking embodiment and emotion into account. Fears of superintelligent AI may exaggerate its capabilities and ignore the complexity and multidimensional nature of intelligence; the problem of aligning values with capabilities exists even before superintelligent AI appears. We should stay alert to the potential risks of AI but focus more on more pressing threats. The Turing test remains a valid way to measure AI. Complexity refers to the phenomenon in which hard-to-predict, complex behavior emerges from a system, and reductionist approaches cannot always explain emergent phenomena in complex systems.

Deep Dive

Key Insights

Why does Melanie Mitchell not like the term 'artificial intelligence'?

The term 'artificial intelligence' is vague and means different things to different people. It also doesn't clearly define what intelligence is, as there are many types and degrees of it. John McCarthy, who coined the term, later regretted using it.

What is the main distinction between weak AI and strong AI according to John Searle?

Weak AI involves machines simulating thinking or carrying out intelligent processes, while strong AI posits that a machine is actually thinking, not just simulating it.

Why have humans historically been fascinated with creating artificial intelligence?

Humans have been driven by a desire to better understand their own thought processes and to create machines that can mimic human cognition. This fascination is deeply rooted in the mythology and ethos of the species.

Why are humans bad at predicting the future of AI development?

Humans struggle to predict AI's future because they have limited understanding of their own intelligence. Tasks that seem simple to humans, like vision, are actually incredibly complex and involve unconscious processes that are invisible to us.

What does Melanie Mitchell believe is the most important open problem in AI?

The most important open problem in AI is how to form and fluidly use concepts. This involves understanding how humans create and apply concepts, which is central to cognition.

What role does analogy making play in human cognition according to Melanie Mitchell?

Analogy making is fundamental to cognition. It underlies how humans form and apply concepts, recognize situations, and generalize knowledge. Without analogies, there can be no concepts, and thus no thought.

Why does Melanie Mitchell believe that autonomous vehicles face significant challenges?

Autonomous vehicles struggle with the open-ended nature of the real world, including edge cases and the long tail problem. They lack common sense and the ability to interpret obstacles correctly, leading to overly cautious behavior.

What does Melanie Mitchell think about the fear of superintelligent AI?

Mitchell believes that the fear of superintelligent AI is overblown. She argues that intelligence is not easily separable into dimensions like rationality and values, and that a superintelligent AI would inherently understand human values.

What is the Santa Fe Institute and what is its focus?

The Santa Fe Institute is a research organization founded in 1984 that focuses on interdisciplinary studies of complex systems. It brings together scientists from various fields to study emergent phenomena and general principles underlying complex systems.

What lesson from Douglas Hofstadter has influenced Melanie Mitchell the most?

Hofstadter taught Mitchell to idealize complex problems, distilling them down to their essence and focusing on the core aspects that need to be solved. This approach has been a guiding principle in her research, particularly in creating the Copycat program.

Shownotes

Melanie Mitchell is a professor of computer science at Portland State University and an external professor at Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture which places the process of analogy making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed a lot of important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". 

Episode Links: AI: A Guide for Thinking Humans (book)

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

00:00 - Introduction
02:33 - The term "artificial intelligence"
06:30 - Line between weak and strong AI
12:46 - Why have people dreamed of creating AI?
15:24 - Complex systems and intelligence
18:38 - Why are we bad at predicting the future with regard to AI?
22:05 - Are fundamental breakthroughs in AI needed?
25:13 - Different AI communities
31:28 - Copycat cognitive architecture
36:51 - Concepts and analogies
55:33 - Deep learning and the formation of concepts
1:09:07 - Autonomous vehicles
1:20:21 - Embodied AI and emotion
1:25:01 - Fear of superintelligent AI
1:36:14 - Good test for intelligence
1:38:09 - What is complexity?
1:43:09 - Santa Fe Institute
1:47:34 - Douglas Hofstadter
1:49:42 - Proudest moment