Deception in Finitely Repeated Security Games

Authors

  • Thanh H. Nguyen, University of Oregon
  • Yongzhao Wang, University of Michigan
  • Arunesh Sinha, University of Michigan
  • Michael P. Wellman, University of Michigan

DOI:

https://doi.org/10.1609/aaai.v33i01.33012133

Abstract

Allocating resources to defend targets from attack is often complicated by uncertainty about the attacker’s capabilities, objectives, or other underlying characteristics. In a repeated interaction setting, the defender can collect attack data over time to reduce this uncertainty and learn an effective defense. However, a clever attacker can manipulate the attack data to mislead the defender, steering the learning process toward its own benefit. We investigate strategic deception on the part of an attacker with private type information who interacts repeatedly with a defender. We present a detailed computation and analysis of both players’ optimal strategies given that the attacker may play deceptively. Computational experiments illuminate conditions conducive to strategic deception and quantify the benefits to the attacker. By taking the attacker’s deception capacity into account, the defender can significantly mitigate its loss from misleading attack actions.
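
To make the setting concrete, the sketch below simulates a small finitely repeated security game: the defender maintains a belief over the attacker’s private type, covers the target with the highest expected loss, and updates its belief from observed attacks, while a deceptive attacker mimics another type in early rounds to skew that belief. Everything here (the type names "aggressive" and "opportunistic", the payoff numbers, the belief-update rule, and the helpers preferred_target, update_belief, defend, and simulate) is an illustrative assumption for exposition, not the model or algorithm from the paper.

# A minimal toy sketch of the setting described in the abstract: a finitely
# repeated security game in which the defender updates a belief over the
# attacker's private type from observed attacks, and a deceptive attacker can
# mimic another type to skew that belief. All type names, payoff numbers, the
# belief-update rule, and the myopic strategies are illustrative assumptions,
# not the model or algorithm from the paper.

TARGETS = [0, 1, 2]
ROUNDS = 10

# Hypothetical type-dependent attacker values for each target.
ATTACKER_VALUES = {
    "aggressive": [5.0, 2.0, 1.0],     # strongly prefers target 0
    "opportunistic": [1.0, 2.0, 5.0],  # strongly prefers target 2
}
DEFENDER_LOSS = [4.0, 3.0, 2.0]  # defender loss if a target is hit while uncovered


def preferred_target(attacker_type):
    """Myopic (non-deceptive) attack: hit the highest-value target."""
    values = ATTACKER_VALUES[attacker_type]
    return max(TARGETS, key=lambda t: values[t])


def update_belief(belief, observed_target):
    """Naive Bayesian update: assume each type hits its preferred target with
    probability 0.8 and each other target with probability 0.1 (an assumption)."""
    posterior = {}
    for atype, prob in belief.items():
        likelihood = 0.8 if observed_target == preferred_target(atype) else 0.1
        posterior[atype] = prob * likelihood
    total = sum(posterior.values())
    return {atype: prob / total for atype, prob in posterior.items()}


def defend(belief):
    """Cover the single target with the highest expected defender loss."""
    expected_loss = [0.0] * len(TARGETS)
    for atype, prob in belief.items():
        target = preferred_target(atype)
        expected_loss[target] += prob * DEFENDER_LOSS[target]
    return max(TARGETS, key=lambda t: expected_loss[t])


def simulate(true_type, deceive_rounds):
    """A deceptive attacker mimics the other type for the first few rounds,
    then exploits the defender's skewed belief for the remaining rounds."""
    belief = {"aggressive": 0.5, "opportunistic": 0.5}
    mimicked = "opportunistic" if true_type == "aggressive" else "aggressive"
    payoff = 0.0
    for r in range(ROUNDS):
        covered = defend(belief)
        acting_type = mimicked if r < deceive_rounds else true_type
        attack = preferred_target(acting_type)
        if attack != covered:
            payoff += ATTACKER_VALUES[true_type][attack]
        belief = update_belief(belief, attack)
    return payoff


if __name__ == "__main__":
    print("truthful attacker payoff: ", simulate("aggressive", deceive_rounds=0))
    print("deceptive attacker payoff:", simulate("aggressive", deceive_rounds=3))

Under these assumed parameters, the deceptive run earns the attacker a strictly higher total payoff than truthful play, which is the qualitative effect the paper’s analysis and experiments quantify in a principled way.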

Published

2019-07-17

How to Cite

Nguyen, T. H., Wang, Y., Sinha, A., & Wellman, M. P. (2019). Deception in Finitely Repeated Security Games. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2133-2140. https://doi.org/10.1609/aaai.v33i01.33012133

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Game Theory and Economic Paradigms