
#038 - Professor Kenneth Stanley - Why Greatness Cannot Be Planned

2021/1/20

Machine Learning Street Talk (MLST)


Shownotes

Professor Kenneth Stanley is currently a research science manager at OpenAI in San Francisco. We've been dreaming about getting Kenneth on the show since the very beginning of Machine Learning Street Talk. Some of you might recall that our first ever show was on the Enhanced POET paper, and of course Kenneth had his hands all over it. He has been cited over 16,000 times, and his most popular paper, with over 3,000 citations, introduced the NEAT algorithm. His interests are neuroevolution, open-endedness, neural networks, artificial life, and AI. He invented the concept of novelty search, which searches without a clearly defined objective. His key idea is that a tyranny of objectives prevails in every aspect of our lives, our society, and indeed our algorithms. Crucially, these objectives produce convergent behaviour and thinking and distract us from discovering the stepping stones that lead to greatness. He thinks that this monotonic objective obsession, the idea that we need to keep improving benchmarks every year, is dangerous. He wrote about this in detail in his recent book "Why Greatness Cannot Be Planned", which is the main topic of discussion in the show. We also cover his ideas on open-endedness in machine learning.

00:00:00 Intro to Kenneth 

00:01:16 Show structure disclaimer 

00:04:16 Passionate discussion 

00:06:26 Why greatness can't be planned and the tyranny of objectives

00:14:40 Chinese Finger Trap  

00:16:28 Perverse Incentives and feedback loops 

00:18:17 Deception 

00:23:29 Maze example 

00:24:44 How can we define curiosity or interestingness 

00:26:59 Open-endedness

00:33:01 ICML 2019 and Yannic, POET, first MLST

00:36:17 Evolutionary algorithms++

00:43:18 POET, the first MLST  

00:45:39 A lesson to GOFAI people 

00:48:46 Machine Learning -- the great stagnation 

00:54:34 Actual scientific successes are usually luck, and against the odds -- BioNTech

00:56:21 Picbreeder and NEAT 

01:10:47 How Tim applies these ideas to his life and why he runs MLST 

01:14:58 Keith's skit about UCF

01:15:13 Main show kick off 

01:18:02 Why does Kenneth value serendipitous exploration so much

01:24:10 Scientific support for Kenneth's ideas in normal life

01:27:12 We should drop objectives to achieve them. An oxymoron? 

01:33:13 Isn't this just resource allocation between exploration and exploitation?

01:39:06 Are objectives merely a matter of degree? 

01:42:38 How do we allocate funds for treasure hunting in society 

01:47:34 A keen nose for what is interesting, and voting can be dangerous 

01:53:00 Committees are the antithesis of innovation 

01:56:21 Does Kenneth apply these ideas to his real life? 

01:59:48 Divergence vs interestingness vs novelty vs complexity 

02:08:13 Picbreeder 

02:12:39 Isn't everything novel in some sense?

02:16:35 Imagine if there were no selection pressure

02:18:31 Is innovation == environment exploitation? 

02:20:37 Would it be possible to take shortcuts if you already knew what the innovations were?

02:21:11 Go-Explore -- does the algorithm encode the stepping stones?

02:24:41 What does it mean for things to be interestingly different? 

02:26:11 Behavioral characterization / diversity measure tied to your broad interests

02:30:54 Shaping objectives 

02:32:49 Why do all ambitious objectives have deception? Picbreeder analogy 

02:35:59 Exploration vs Exploitation, Science vs Engineering 

02:43:18 Schools of thought in ML, and could search lead to AGI?

02:45:49 Official ending