Abstract:
Reinforcement Learning (RL) has gained significant traction as a powerful paradigm for solving sequential decision-making problems, driving notable progress in autonomous vehicles [11, 17], algorithmic trading [5], and game playing [13, 14]. However, training and deploying RL-based agents in real-world scenarios often requires addressing safety constraints, as failure to adhere to them can lead to catastrophic consequences. For example, an agent learning to hover a helicopter over a target area may crash it by choosing a series of unsafe exploratory actions. Safe Reinforcement Learning (Safe RL) is a promising framework that aims to produce agents that operate within predefined safety bounds while optimizing performance. This report presents in detail different formulations of the Safe RL problem, methods for quantifying safety in reinforcement learning, and tractable solution methods that perform optimally while adhering to predefined safety constraints. Moreover, we explore in depth techniques for formalizing and proving performance bounds on RL algorithms. With the findings of this project, we aim to advance the theoretical and practical understanding of Safe RL, paving the way for its adoption in high-stakes domains requiring robust decision-making under constraints, e.g., safe autonomous driving.