
On synaptic learning rules for spiking neurons - with Friedemann Zenke - #11

2024/4/27

Theoretical Neuroscience Podcast


Shownotes

Today’s AI is largely based on supervised learning of neural networks using the backpropagation-of-error synaptic learning rule. This learning rule relies on differentiating continuous activation functions and is thus not directly applicable to spiking neurons, whose all-or-nothing spikes are non-differentiable. Today’s guest developed the SuperSpike algorithm to address this problem. He has also recently developed a more biologically plausible learning rule based on self-supervised learning. We talk about both.
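The core difficulty mentioned above can be sketched in a few lines: a spiking neuron's output is a hard threshold (Heaviside step), whose derivative is zero almost everywhere, so gradient-based learning gets no signal. Surrogate-gradient methods such as SuperSpike work around this by using the hard spike on the forward pass but substituting a smooth surrogate derivative on the backward pass. The snippet below is a minimal, hypothetical illustration of that idea only; the actual SuperSpike rule additionally involves membrane-potential dynamics and eligibility traces not shown here, and the fast-sigmoid surrogate and `beta` parameter are illustrative choices.

```python
def heaviside(v):
    """Spike nonlinearity: emit a spike (1.0) when the membrane
    potential v reaches threshold (here, v >= 0).
    Its true derivative is 0 almost everywhere, so plain
    backpropagation receives no learning signal through it."""
    return 1.0 if v >= 0.0 else 0.0


def surrogate_grad(v, beta=1.0):
    """Smooth stand-in for the spike derivative on the backward pass:
    the derivative of a fast sigmoid, 1 / (1 + beta*|v|)^2.
    (Illustrative surrogate; beta controls its sharpness.)"""
    return 1.0 / (1.0 + beta * abs(v)) ** 2


# Forward pass uses the hard spike; backward pass uses the surrogate.
v = -0.5                  # membrane potential just below threshold
spike = heaviside(v)      # no spike is emitted
grad = surrogate_grad(v)  # yet the surrogate gradient is nonzero,
                          # so a learning signal can still flow
```

The design point is the asymmetry: the network's behavior (forward pass) remains genuinely spiking, while learning (backward pass) pretends the spike function were smooth.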