Towards More Human-Like Computer Opponents

Michael Freed, Travis Bear, Herrick Goldman, Geoffrey Hyatt, Paul Reber and Josh Tauber

Current-generation online games typically incorporate a "computer" opponent to train new players to compete against human opponents. The quality of this training depends to a large degree on how closely the computer's play resembles that of an experienced human player. For instance, inhuman weaknesses in computer play encourage new players to develop tactics, prediction rules and playing styles that will be ineffective against people. Game designers often compensate for weaknesses in the computer's play by providing it with superhuman capabilities such as omniscience. However, such abilities render otherwise important tactics ineffective and thus discourage players from developing useful skills. These differences are especially pronounced in "real-time strategy" games such as StarCraft, where tactics are often designed to exploit specific human limitations. An informal survey of experienced StarCraft players reveals numerous play-critical differences between human and computer performance. In this paper, we identify several of these differences and then discuss a prototyping tool for constructing appropriately human-like software agents.

This page is copyrighted by AAAI. All rights reserved.