Trust and Autonomous Systems
Papers from the 2013 AAAI Spring Symposium
Don Sofge, Geert-Jan Kruijff, William F. Lawless, Program Cochairs
Trust is a key issue in the development and deployment of autonomous systems that work with humans. Humans must be able to trust the actions of machines in order to be willing to work with them, and machines must develop or establish trust in the actions of their human coworkers to ensure effective collaboration. There is also the issue of autonomous agents, robots, and systems trusting one another and trusting humans.
But trust can mean different things in different contexts. For flight control systems on airplanes, trust may mean meeting rigorous criteria regarding the structural qualities of the airplane, its flightworthiness, and a provably stable control system. In the context of humans interacting with humanoid robots, trust may relate more closely to the interdependence between human and robot in correctly reading and interpreting each other's voice commands, gestures, and observed actions, and to the likelihood that each will do what the other expects. In the context of an autonomous automobile carrying passengers, trust in the system may be the expectation that the system will respond correctly not only to foreseen road and traffic conditions, but also to unusual circumstances (for example, gridlock; alternative route planning; a child running into the street; running out of gas on the highway; an engine catching fire; hearing a fire engine or ambulance with siren blaring; or a flat tire causing the vehicle to swerve). Interdependent trust also involves system controllers and society. System controllers, human or machine, must be able to exercise control at the individual, group, and system levels; and society must be willing to entrust its citizens, including the elderly and the young, to the system.
The papers in this report explore the various aspects and meanings of trust between humans and machines in different situational contexts, and the social dynamics of trust in teams or organizations composed of autonomous machines working together with humans. They seek to identify or develop methods for engendering trust between humans and autonomous machines, to consider the static and dynamic aspects of trust, and to propose metrics for measuring trust.