Certifiable Trust in Autonomous Systems: Making the Intractable Tangible

Authors

  • Joseph B. Lyons, Air Force Research Laboratory
  • Matthew A. Clark, Air Force Research Laboratory
  • Alan R. Wagner
  • Matthew J. Schuelke, SRA International

DOI:

https://doi.org/10.1609/aimag.v38i3.2717

Abstract

This article discusses verification and validation (V&V) of autonomous systems, a process that will prove difficult for systems designed to execute decision initiative. V&V of such systems should include evaluations of the system's trustworthiness based on transparency inputs and scenario-based training. Transparency facets should be used to establish shared awareness and shared intent among the designer, tester, and user of the system. These transparency facets allow the human to understand the system's goals, social intent, contextual awareness, task limitations, analytical underpinnings, and team-based orientation in an attempt to verify its trustworthiness. Scenario-based training can then be used to validate that programming across a variety of situations that test the system's behavioral repertoire. This novel method should be used to analyze behavioral adherence to a set of governing principles coded into the system.
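The scenario-based idea in the abstract can be made concrete with a small illustration. The sketch below is not from the article; it is a minimal, hypothetical Python harness (all names and principles are invented for illustration) showing one way that governing principles, modeled as predicates over a scenario and a chosen action, could be checked against a system's behavior across a battery of scenarios.

```python
# Hypothetical sketch (not from the article): a minimal scenario-based
# validation harness that checks an autonomous system's chosen actions
# against a set of governing principles. All names are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Scenario:
    name: str
    observation: dict  # what the system perceives in this scenario


# A "governing principle" is modeled as a predicate over (scenario, action).
Principle = Callable[[Scenario, str], bool]


def never_enter_no_fly_zone(scenario: Scenario, action: str) -> bool:
    """Illustrative principle: do not proceed when a no-fly zone is ahead."""
    return not (scenario.observation.get("no_fly_zone_ahead") and action == "proceed")


def validate(system: Callable[[dict], str],
             scenarios: List[Scenario],
             principles: List[Principle]) -> List[str]:
    """Run every scenario and report any principle the chosen action violates."""
    violations = []
    for sc in scenarios:
        action = system(sc.observation)
        for principle in principles:
            if not principle(sc, action):
                violations.append(
                    f"{sc.name}: action '{action}' violates {principle.__name__}")
    return violations


if __name__ == "__main__":
    # Toy autonomous "system": always proceeds unless an obstacle is reported.
    def toy_system(obs: dict) -> str:
        return "hold" if obs.get("obstacle") else "proceed"

    scenarios = [
        Scenario("clear_airspace", {"no_fly_zone_ahead": False}),
        Scenario("restricted_airspace", {"no_fly_zone_ahead": True}),
    ]
    for v in validate(toy_system, scenarios, [never_enter_no_fly_zone]):
        print(v)
```

Under these assumptions, the second scenario surfaces a violation, which is the kind of behavioral non-adherence the article proposes scenario-based evaluation should expose.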

Published

2017-10-02

How to Cite

Lyons, J. B., Clark, M. A., Wagner, A. R., & Schuelke, M. J. (2017). Certifiable Trust in Autonomous Systems: Making the Intractable Tangible. AI Magazine, 38(3), 37-49. https://doi.org/10.1609/aimag.v38i3.2717

Issue

Vol. 38 No. 3 (2017)

Section

Articles