Verifiable and Interpretable Reinforcement Learning through Program Synthesis

Authors

  • Abhinav Verma, Rice University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019902

Abstract

We study the problem of generating interpretable and verifiable policies for Reinforcement Learning (RL). Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim of this work is to find policies that can be represented in high-level programming languages. Such programmatic policies have several benefits, including being more easily interpreted than neural networks and being amenable to verification by scalable symbolic methods. The generation methods for programmatic policies also provide a mechanism for systematically incorporating domain knowledge to guide the policy search. The interpretability and verifiability of these policies provide the opportunity to deploy RL-based solutions in safety-critical environments. This thesis draws on, and extends, work from both the machine learning and formal methods communities.
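As a rough illustration of the idea (a minimal sketch, not taken from the thesis; the task, state layout, and thresholds below are all hypothetical), a programmatic policy is an ordinary, human-readable program rather than a neural network:

    # Hypothetical programmatic policy for a CartPole-style balancing task.
    # The state layout and threshold are illustrative assumptions, not the
    # thesis's actual benchmark or synthesized program.

    def programmatic_policy(state):
        """Map an observation to an action via interpretable rules.

        state: (cart_position, cart_velocity, pole_angle, pole_velocity)
        returns: 0 (push left) or 1 (push right)
        """
        _, _, pole_angle, pole_velocity = state
        # Simple readable rule: push in the direction the pole is falling,
        # anticipating its motion with a velocity term.
        if pole_angle + 0.5 * pole_velocity > 0.0:
            return 1  # push right
        return 0      # push left

Because such a policy is a short program over named variables, it can be read and audited directly, and its behavior can be checked against formal specifications by symbolic verification tools; doing the same for a neural network policy is substantially harder.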

Published

2019-07-17

How to Cite

Verma, A. (2019). Verifiable and Interpretable Reinforcement Learning through Program Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9902-9903. https://doi.org/10.1609/aaai.v33i01.33019902

Section

Doctoral Consortium Track