Reinforcement of Local Pattern Cases for Playing Tetris

Houcine Romdhane, Luc Lamontagne

In this paper, we investigate the use of reinforcement learning in CBR for evaluating and managing a legacy case base for playing the game of Tetris. Each case corresponds to a local pattern describing the relative heights of a subset of columns where pieces could be placed. We evaluate these patterns through reinforcement learning to determine whether significant performance improvement can be observed. To estimate the values of the patterns, we compare Q-learning with a simpler temporal difference formulation. Our results indicate that training without discounting yields slightly better results than the other evaluation schemes. We also explore how the reinforcement values of the patterns can help reduce the size of the case base, and we report on experiments with forgetting cases.
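The kind of temporal-difference value estimation the abstract describes might be sketched as follows. This is an illustrative assumption, not the paper's implementation: the pattern encoding (tuples of relative column heights), the reward scheme, and the learning rate are all hypothetical, and `gamma=1.0` reflects the undiscounted training the abstract reports as performing slightly better.

```python
from collections import defaultdict

def td_update(values, pattern, next_pattern, reward, alpha=0.1, gamma=1.0):
    """One TD(0) step: move V(pattern) toward reward + gamma * V(next_pattern).

    gamma=1.0 gives the undiscounted variant; values maps each local
    column-height pattern to its current estimated value.
    """
    target = reward + gamma * values[next_pattern]
    values[pattern] += alpha * (target - values[pattern])
    return values[pattern]

# Hypothetical usage: patterns are relative column heights, reward is
# earned for clearing lines after placing a piece on that pattern.
values = defaultdict(float)
v = td_update(values, pattern=(0, 1, 2), next_pattern=(1, 1, 0), reward=1.0)
```

Such per-pattern value estimates could then rank cases, supporting the case-forgetting experiments the abstract mentions (e.g., dropping patterns with persistently low values).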

Subjects: 3.1 Case-Based Reasoning; 12.1 Reinforcement Learning

Submitted: Feb 23, 2008

This page is copyrighted by AAAI. All rights reserved.