Welcome your robot overlords! In episode 101 of Overthink, Ellie and David speak with Dr. Shazeda Ahmed, a specialist in AI safety, to dive into the philosophy guiding artificial intelligence. With the rise of LLMs like ChatGPT, the lofty utilitarian principles of Effective Altruism have taken the tech world by storm. Many who work on AI safety and ethics worry about the dangers of AI, from how automation might put entire categories of workers out of a job to how future forms of AI might pose a catastrophic “existential risk” to humanity as a whole. And yet, optimistic CEOs portray AI as the beginning of an easy, technology-assisted utopia. Who is right about AI: the doomers or the utopians? And whose voices are part of the conversation in the first place? Is AI risk talk spearheaded by well-meaning experts or by billionaire investors? And can philosophy guide discussions about AI toward the right thing to do?
Check out the episode's extended cut here!
**Works Discussed**

- Nick Bostrom, *Superintelligence*
- Adrian Daub, *What Tech Calls Thinking*
- Virginia Eubanks, *Automating Inequality*
- Mollie Gleiberman, “Effective Altruism and the strategic ambiguity of ‘doing good’”
- Matthew Jones and Chris Wiggins, *How Data Happened*
- William MacAskill, *What We Owe the Future*
- Toby Ord, *The Precipice*
- Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality”
- Inioluwa Deborah Raji and Roel Dobbe, “Concrete Problems in AI Safety, Revisited”
- Peter Singer, *Animal Liberation*
- Amia Srinivasan, “Stop the Robot Apocalypse”

**Modem Futura** is your guide to the bold frontiers of tomorrow, where technology,... Listen on: Apple Podcasts | Spotify
Support the show
Patreon | patreon.com/overthinkpodcast
Website | overthinkpodcast.com
Instagram & Twitter | @overthink_pod
Email | [email protected]
YouTube | Overthink podcast