Enabling Trust with Behavior Metamodels

Scott A. Wallace

Intelligent assistants promise to simplify our lives and increase our productivity. Yet for this promise to become reality, the Artificial Intelligence community will need to address two important issues. The first is how to determine that the assistants we build will, in fact, behave appropriately and safely. The second is how to convince society at large that these assistants are useful and reliable tools that should be trusted with important tasks. In this paper, we argue that both of these issues can be addressed by behavior metamodels (i.e., abstract models of how an agent behaves). Our argument rests on 1) experimental evidence that metamodels can improve debugging and validation efficiency, and 2) the ways in which metamodels contribute to three fundamental components of trusting relationships established in previous literature.

Subjects: 9. Foundational Issues; 6. Computer-Human Interaction

Submitted: Jan 26, 2007


This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.