"Models Don't 'Get Reward'" by Sam Ringer

2023/1/12

LessWrong (Curated & Popular)

https://www.lesswrong.com/posts/TWorNr22hhYegE4RT/models-don-t-get-reward

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

In terms of content, this has a lot of overlap with *Reward is not the optimization target*. I'm basically rewriting a part of that post in language I personally find clearer, emphasising what I think is the core insight.

When thinking about deception and RLHF training, people often use a simplified threat model something like this:

  • A model takes some actions.
  • If a human approves of these actions, the human gives the model some reward.
  • Humans can be deceived into giving reward in situations where they would not have done so had they known more.
  • Models will take advantage of this so they can get more reward.
  • Models will therefore become deceptive.

Before continuing, I would encourage you to really engage with the above. Does it make sense to you? Is it making any hidden assumptions? Is it missing any steps? Can you rewrite it to be more mechanistically correct?

I believe that when people use the above threat model, they are either using it as shorthand for something else or they misunderstand how reinforcement learning works. Most alignment researchers will be in the former category. However, I was in the latter.
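To make the loop in the threat model concrete, here is a minimal toy sketch (not from the post) of a model being trained on human-supplied reward: plain REINFORCE on a three-action bandit, where a simulated overseer approves of one action. The names here (`human_approves`, `logits`, the learning rate and action count) are illustrative assumptions, not anything from the original.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)      # the "model": a preference over 3 possible actions
learning_rate = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def human_approves(action: int) -> float:
    """Stand-in for the human overseer: approves of action 2 only."""
    return 1.0 if action == 2 else 0.0

for step in range(500):
    probs = softmax(logits)
    action = rng.choice(3, p=probs)      # the model takes an action
    reward = human_approves(action)      # the human gives (or withholds) reward

    # REINFORCE update: the reward never enters the model as an observation;
    # it only scales the gradient applied to the weights.
    grad_log_prob = -probs
    grad_log_prob[action] += 1.0
    logits += learning_rate * reward * grad_log_prob

print(softmax(logits))  # probability mass ends up concentrated on the approved action
```

Note that even in this toy version, the reward is never something the model receives or observes; it is just a scalar multiplying a weight update. That mechanistic detail is worth keeping in mind when checking the threat model above.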