Connor Leahy on Why Humanity Risks Extinction from AGI

2024/11/22
Future of Life Institute Podcast

People

Connor Leahy
Gus Docker

Topics
Connor Leahy argues that the current AI race gives too little weight to potential risks: in the pursuit of AGI, some organizations and individuals are driven more by their own interests and ideologies than by humanity's well-being. He contends that AGI development should not be driven primarily by economic incentives, and that the risks must not be ignored. He advocates a more scientific and cautious approach to AI research, and calls on the public to pay attention to AI risk and take part in discussions of AI policy. As host, Gus Docker steers the conversation and raises challenges, such as whether AI progress is driven mainly by resources rather than research breakthroughs, whether AGI companies' motivations differ from those of traditional corporations, and how the public should respond to AI risk.

Chapters
Connor Leahy discusses his motivations for writing The Compendium, an introduction to AI risk: the need for a comprehensive resource aimed at a non-technical audience, and the social and political dynamics often overlooked by those with conflicts of interest.
  • The Compendium was a team project aiming to create a comprehensive resource on AI risk.
  • It targets a non-technical audience and addresses arguments scattered across various sources.
  • The book highlights social and political dynamics often ignored due to conflicts of interest.

Shownotes

Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.   

Here's the document we discuss in the episode:   

https://www.thecompendium.ai  

Timestamps: 

00:00 The Compendium 

15:25 The motivations of AGI corps  

31:17 AI is grown, not written  

52:59 A science of intelligence 

01:07:50 Jobs, work, and AGI  

01:23:19 Superintelligence  

01:37:42 Open-source AI  

01:45:07 What can we do?