Retaining Learned Behavior During Real-Time Neuroevolution

Thomas D’Silva, Roy Janik, Michael Chrien, Kenneth O. Stanley, and Risto Miikkulainen

Creating software-controlled agents in videogames that can learn and adapt to player behavior is a difficult task. The real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method, which evolves increasingly complex artificial neural networks in real time, has been shown to be an effective way of achieving behaviors beyond simple scripted character behavior. In NERO, a videogame built to showcase the features of rtNEAT, agents are trained on various tasks, including shooting enemies, avoiding enemies, and navigating around obstacles. Training the neural networks to perform a series of distinct tasks can be problematic: the longer they train on a new task, the more likely they are to forget earlier skills. This paper investigates a technique for increasing the probability that a population will remember old skills as it learns new ones. By setting aside the most fit individuals at the time a skill has been learned and then occasionally introducing their offspring into the population, the skill is retained. How large to make this milestone pool of individuals and how often to insert the offspring of the milestone pool into the general population are the primary focus of this paper.
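The milestone-pool idea described above can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the paper's rtNEAT implementation: genomes are plain real-valued vectors, `mutate` stands in for NEAT's structural and weight mutations, and the names (`evolve`, `pool_size`, `insert_prob`) are illustrative. After skill A is learned, the fittest individuals are frozen as a milestone pool; while skill B is trained, each replacement has some probability of being an offspring of a milestone individual rather than of the current champion.

```python
import random

def mutate(genome, rng, sigma=0.1):
    # Gaussian perturbation of a real-valued genome (a stand-in for
    # NEAT's weight and structural mutations).
    return [g + rng.gauss(0.0, sigma) for g in genome]

def evolve(population, fitness, generations, rng,
           milestone_pool=None, insert_prob=0.2):
    """Illustrative steady-state loop: each generation, the worst half of
    the population is replaced by mutated offspring. If a milestone_pool
    is given, each replacement is an offspring of a randomly chosen
    milestone individual with probability insert_prob, otherwise an
    offspring of the current champion."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        champion = population[0]
        half = len(population) // 2
        for i in range(half, len(population)):
            if milestone_pool and rng.random() < insert_prob:
                parent = rng.choice(milestone_pool)  # reintroduce old skill
            else:
                parent = champion
            population[i] = mutate(parent, rng)
    return population

rng = random.Random(0)
# Two toy "skills": drive the single gene toward +1.0, then toward -1.0.
task_a = lambda g: -abs(g[0] - 1.0)
task_b = lambda g: -abs(g[0] + 1.0)

pop = [[rng.uniform(-2.0, 2.0)] for _ in range(20)]
pop = evolve(pop, task_a, 50, rng)                    # learn skill A
pool = sorted(pop, key=task_a, reverse=True)[:5]      # milestone pool for A
pop = evolve(pop, task_b, 50, rng, milestone_pool=pool)  # learn B, retain A
```

In this sketch the milestone pool itself is never mutated; only its offspring enter the population, so the recorded skill cannot degrade even as the active population specializes on the new task.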


This page is copyrighted by AAAI. All rights reserved.