Paper: https://arxiv.org/pdf/2408.04498
This research introduces Model-Based Transfer Learning (MBTL), a framework for improving the efficiency and robustness of deep reinforcement learning (RL) in contextual Markov decision processes (CMDPs). MBTL strategically selects which tasks to train on in order to maximize generalization performance across the full range of tasks, modeling training performance with Gaussian processes and the generalization gap as a function of contextual similarity. Theoretical analysis establishes sublinear regret for this task-selection process, and experiments on urban traffic and continuous control benchmarks demonstrate significant sample-efficiency improvements (up to 50x) over traditional training methods. The method's effectiveness is relatively insensitive to the choice of underlying RL algorithm and hyperparameters.
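The core selection loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes 1-D contexts, a simple RBF Gaussian process posterior mean for predicting training performance, and a generalization gap that grows linearly with context distance (the `slope` parameter is a placeholder for the gap model fit in the paper). The next training task is the candidate whose predicted performance, minus the gap to every target task, most improves coverage over what the already-trained tasks provide.

```python
import numpy as np

def rbf(a, b, ell=0.25):
    """Squared-exponential kernel over 1-D context values."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_mean(xs, ys, xq, noise=1e-6):
    """GP posterior mean at query points xq given observations (xs, ys)."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    return rbf(xq, xs) @ np.linalg.solve(K, ys)

def select_next_task(contexts, trained_x, trained_y, slope=0.8):
    """Pick the next training context with the largest estimated marginal
    generalization gain; the gap is modeled as slope * |context distance|
    (an illustrative stand-in for the paper's fitted gap model)."""
    contexts = np.asarray(contexts, float)
    tx = np.asarray(trained_x, float)
    ty = np.asarray(trained_y, float)
    # Current generalized performance on each target, taking the best
    # already-trained source task for that target.
    gaps = slope * np.abs(contexts[:, None] - tx[None, :])
    current = np.max(ty[None, :] - gaps, axis=1, initial=0.0)
    # GP-predicted training performance at every candidate context.
    pred = gp_mean(tx, ty, contexts)
    best_gain, best = -np.inf, None
    for i, x in enumerate(contexts):
        if np.any(np.isclose(x, tx)):
            continue  # skip contexts already trained on
        # Coverage if we additionally trained at x.
        cand = np.maximum(current, pred[i] - slope * np.abs(contexts - x))
        gain = cand.sum() - current.sum()
        if gain > best_gain:
            best_gain, best = gain, float(x)
    return best

# Usage: 11 target contexts on [0, 1], two tasks trained so far.
ctx = np.linspace(0.0, 1.0, 11)
nxt = select_next_task(ctx, [0.0, 0.2], [0.9, 0.85])
```

In this toy run the selected context falls away from the already-trained cluster near 0, since untrained targets there suffer the largest generalization gap; the real method repeats this greedy step after each round of training.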
ai, model, mit, genai, generativeai, artificialintelligence, arxiv, research, paper, publication, reinforcement learning, rl, ml