This week Dr. Tim Scarfe, Dr. Keith Duggar and Yannic Kilcher discuss multi-armed bandits and pure exploration with Dr. Wouter M. Koolen, Senior Researcher in the Machine Learning group at Centrum Wiskunde & Informatica.
Wouter specialises in machine learning theory, game theory, information theory, statistics and optimisation. He is currently interested in pure exploration in multi-armed bandit models, game tree search, and accelerated learning in sequential decision problems. His research has been cited 1,000 times, and he has published at NeurIPS, the number-one ML conference, 14 times, alongside many other exciting publications.
Today we are going to talk about two of the most studied settings in control, decision theory, and learning in unknown environments: the multi-armed bandit (MAB) and reinforcement learning (RL) approaches. Two key questions of pure exploration come up:
When can an agent stop learning and start exploiting the knowledge it has obtained?
Which strategy leads to minimal learning time?
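To make the stopping question concrete, here is a minimal, hypothetical sketch (not from the episode) of best-arm identification by successive elimination: sample every surviving arm until confidence intervals let you discard all but one, at which point the agent can stop learning and start exploiting. The Bandit class, the Hoeffding-style radius, and delta = 0.05 are illustrative assumptions, not Wouter's algorithms.

```python
# Illustrative sketch: pure exploration (best-arm identification)
# via successive elimination with Hoeffding-style confidence bounds.
# All names here (Bandit, successive_elimination, delta) are made up
# for this example; they are not from the episode.

import math
import random


class Bandit:
    """K arms with Bernoulli rewards; true means are hidden from the learner."""

    def __init__(self, means):
        self.means = means

    def pull(self, arm):
        return 1.0 if random.random() < self.means[arm] else 0.0


def successive_elimination(bandit, n_arms, delta=0.05, max_rounds=100_000):
    """Sample all surviving arms each round; drop an arm once its upper
    confidence bound falls below another arm's lower bound. Stops (and
    the agent can start exploiting) when one arm remains."""
    active = list(range(n_arms))
    sums = [0.0] * n_arms
    counts = [0] * n_arms
    for t in range(1, max_rounds + 1):
        for arm in active:
            sums[arm] += bandit.pull(arm)
            counts[arm] += 1

        def radius(arm):
            # Hoeffding radius; the log term unions over arms and rounds.
            return math.sqrt(math.log(4 * n_arms * t * t / delta) / (2 * counts[arm]))

        best_lcb = max(sums[a] / counts[a] - radius(a) for a in active)
        active = [a for a in active if sums[a] / counts[a] + radius(a) >= best_lcb]
        if len(active) == 1:
            return active[0], sum(counts)  # recommended arm, total samples
    # Fallback if the budget runs out before elimination finishes.
    return max(active, key=lambda a: sums[a] / counts[a]), sum(counts)


if __name__ == "__main__":
    random.seed(0)
    arm, samples = successive_elimination(Bandit([0.3, 0.5, 0.7]), n_arms=3)
    print(f"recommended arm {arm} after {samples} samples")
```

The total sample count at stopping is exactly the "minimal learning time" question above: tighter confidence bounds or smarter sampling rules, which the episode discusses, stop sooner.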
00:00:00 What are multi-armed bandits? / Show trailer
00:12:55 Show introduction
00:15:50 Bandits
00:18:58 Taxonomy of decision framework approaches
00:25:46 Exploration vs Exploitation
00:31:43 The sharp divide between modes
00:34:12 Bandit measures of success
00:36:44 Connections to reinforcement learning
00:44:00 When to apply pure exploration in games
00:45:54 Bandit lower bounds: a pure exploration renaissance
00:50:21 Pure exploration compiler dreams
00:51:56 What would the PX-compiler DSL look like?
00:57:13 The long arms of the bandit
01:00:21 Causal models behind the curtain of arms
01:02:43 Adversarial bandits: arms trying to beat you
01:05:12 Bandits as an optimization problem
01:11:39 Asymptotic optimality vs. practical performance
01:15:38 Pitfalls hiding under asymptotic cover
01:18:50 Adding features to bandits
01:27:24 Moderate confidence regimes
01:30:33 Algorithm choice is highly sensitive to bounds
01:46:09 Postscript: Keith's interesting piece on quantum
https://www.cwi.nl/research-groups/ma...
#machinelearning