A Monte Carlo Approach to Skill-Based Automated Playtesting

Authors

  • Britton Horn, Northeastern University
  • Josh Miller, Northeastern University
  • Gillian Smith, Worcester Polytechnic Institute
  • Seth Cooper, Northeastern University

DOI:

https://doi.org/10.1609/aiide.v14i1.13036

Keywords:

Monte Carlo, Automated Analysis, Human Computation Game

Abstract

To create well-crafted learning progressions, designers guide players as new game skills are introduced, giving them ample time to master each skill. However, analyzing the quality of a learning progression is challenging, especially during the design phase, when content is ever-changing. This research presents the application of Stratabots — automated player simulations based on models of players with varying sets of skills — to the human computation game Foldit. Stratabot performance analysis coupled with player data reveals a relatively smooth learning progression within the tutorial levels, yet still shows room for improvement. Leveraging existing general gameplaying algorithms such as Monte Carlo Evaluation can reduce the development time of this approach to automated playtesting without losing the predictive power of the player model.
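The Monte Carlo Evaluation mentioned in the abstract can be sketched in its generic form: score each candidate action by averaging the returns of random rollouts, then pick the best-scoring action. The sketch below is illustrative only — the function names, the `simulate` interface, and the toy action payoffs are assumptions for demonstration, not the paper's actual Stratabot or Foldit code.

```python
import random

def monte_carlo_evaluate(state, actions, simulate, n_rollouts=100, rng=None):
    """Rank candidate actions by the mean return of randomized rollouts.

    `simulate(state, action, rng)` plays out a randomized continuation
    after taking `action` and returns a numeric score. This interface is
    hypothetical, chosen for illustration.
    """
    rng = rng or random.Random()

    def mean_return(action):
        return sum(simulate(state, action, rng)
                   for _ in range(n_rollouts)) / n_rollouts

    return max(actions, key=mean_return)

# Toy domain (assumed): each action has a hidden mean payoff plus noise.
payoffs = {"fold": 0.8, "wiggle": 0.5, "shake": 0.3}

def simulate(state, action, rng):
    return payoffs[action] + rng.gauss(0, 0.2)

best = monte_carlo_evaluate(None, list(payoffs), simulate,
                            n_rollouts=200, rng=random.Random(0))
```

With enough rollouts the noise averages out and the action with the highest underlying payoff is selected; a skill-limited player model would restrict `actions` or bias `simulate` accordingly.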

Published

2018-09-25

How to Cite

Horn, B., Miller, J., Smith, G., & Cooper, S. (2018). A Monte Carlo Approach to Skill-Based Automated Playtesting. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 14(1), 166-172. https://doi.org/10.1609/aiide.v14i1.13036