Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB
Twitter: https://twitter.com/MLStreetTalk
In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, to delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi decided to pursue a PhD in this field, and learn the lessons he took from his startup experience.
Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making. Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning.
As they discuss the role of the environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models can generate their own training data, where RL hits its limits, and what the future of Software 2.0 holds for interpretability.
From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world!
TOC
Tech & Startup Background [00:00:00]
Pursuing PhD in Deep RL [00:03:59]
Startup Lessons [00:11:33]
Serendipity vs Planning [00:12:30]
Objectives & Decision Making [00:19:19]
Minimax Regret & Uncertainty [00:22:57]
Robustness in RL & Zero-Sum Games [00:26:14]
RL vs Supervised Learning [00:34:04]
Exploration & Intelligence [00:41:27]
Environment, Emergence, Abstraction [00:46:31]
Open-endedness & Intelligence Explosion [00:54:28]
Language Models & Training Data [01:04:59]
RLHF & Language Models [01:16:37]
Creativity in Language Models [01:27:25]
Limitations of RL [01:40:58]
Software 2.0 & Interpretability [01:45:11]
Language Models & Code Reliability [01:48:23]
Robust Prioritized Level Replay [01:51:42]
Open-ended Learning [01:55:57]
Auto-curriculum & Deep RL [02:08:48]
Robotics & Open-ended Learning [02:31:05]
Learning Potential & MDPs [02:36:20]
Universal Function Space [02:42:02]
Goal-Directed Learning & Auto-Curricula [02:42:48]
Advice & Closing Thoughts [02:44:47]
References:
https://www.springer.com/gp/book/9783319155234
https://arxiv.org/abs/2106.06860
The Case for Strong Emergence (Sabine Hossenfelder)
The Game of Life (Conway)
Toolformer: Language Models Can Teach Themselves to Use Tools (Meta AI): https://arxiv.org/abs/2302.04761
Paired Open-Ended Trailblazer (POET): https://arxiv.org/abs/1901.01753
Schmidhuber's Artificial Curiosity
Gödel Machines (Schmidhuber)
PowerPlay (Schmidhuber)
Robust Prioritized Level Replay: https://openreview.net/forum?id=NfZ6g2OmXEk
Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design: https://arxiv.org/abs/2012.02096
ACCEL: Evolving Curricula with Regret-Based Environment Design
Go-Explore: A New Approach for Hard-Exploration Problems
Learning with AMIGo: Adversarially Motivated Intrinsic Goals
Pattern Recognition and Machine Learning (Bishop)
Reinforcement Learning: An Introduction (Sutton & Barto, 2nd ed.): https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf