PCGRL: Procedural Content Generation via Reinforcement Learning

Authors

  • Ahmed Khalifa, New York University
  • Philip Bontrager, New York University
  • Sam Earle, New York University
  • Julian Togelius, New York University

DOI

https://doi.org/10.1609/aiide.v16i1.7416

Abstract

We investigate how reinforcement learning can be used to train level-designing agents. This represents a new approach to procedural content generation in games, where level design is framed as a game, and the content generator itself is learned. By seeing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action so that the expected final level quality is maximized. This approach can be used when few or no examples exist to train from, and the trained generator is very fast. We investigate three different ways of transforming two-dimensional level design problems into Markov decision processes, and apply these to three game environments.
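To make the framing concrete, here is a minimal sketch (not the authors' code; all names and the quality metric are illustrative assumptions) of a "narrow"-style level-design MDP: the agent visits one tile per step in scanline order, each action rewrites the current tile, and the reward is the change in a level-quality score, so maximizing return maximizes expected final level quality.

```python
import numpy as np

class NarrowPCGEnv:
    """Toy narrow-representation level-design MDP (illustrative sketch only).

    The agent edits one tile per step in scanline order; the reward is the
    change in a quality score, so return equals final minus initial quality.
    """

    def __init__(self, height=4, width=4, n_tiles=2, seed=0):
        self.h, self.w, self.n_tiles = height, width, n_tiles
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Start from a random level, as in the paper's setup.
        self.grid = self.rng.integers(0, self.n_tiles, (self.h, self.w))
        self.pos = 0  # scanline index of the tile to edit next
        return self.grid.copy(), self.pos

    def quality(self, grid):
        # Hypothetical stand-in for a real metric (e.g. path length,
        # playability): here, the fraction of empty (0) tiles.
        return float((grid == 0).mean())

    def step(self, action):
        before = self.quality(self.grid)
        y, x = divmod(self.pos, self.w)
        self.grid[y, x] = action               # rewrite the current tile
        reward = self.quality(self.grid) - before
        self.pos += 1
        done = self.pos >= self.h * self.w     # stop after one full pass
        return (self.grid.copy(), self.pos), reward, done

# Example: a trivial policy that always places the empty tile.
env = NarrowPCGEnv()
done, total = False, 0.0
while not done:
    _, r, done = env.step(0)
    total += r
```

The "turtle" and "wide" representations from the paper differ only in how `pos` is controlled: the turtle agent moves its own edit cursor, while the wide agent chooses both the location and the tile in a single action.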

Published

2020-10-01

How to Cite

Khalifa, A., Bontrager, P., Earle, S., & Togelius, J. (2020). PCGRL: Procedural Content Generation via Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 16(1), 95-101. https://doi.org/10.1609/aiide.v16i1.7416