Benjamin Bell and Daniel McFarlane
Users of complex automation sometimes make mistakes. Vendors and supervisors often place accountability for these mistakes on the users themselves and their limited understanding of how these systems are "supposed" to be used (a perspective that influences how operators are trained and fosters efforts to design systems that explain their actions). Seeking to give users a better understanding of their systems, however, sidesteps the critical issue of how we ought to get computers to shoulder the responsibility of understanding humans. The machine is the piece of this equation that is "supposed" to conform. Our approach has the potential to make what people do naturally the right thing. This work builds on related research in intent inference, which has focused on representing the tasks and actions of a user so that a system can maintain a representation of the user's intent and tailor its decision support accordingly. There is a growing need to extend the notion of intent inference to multi-operator settings, as systems are targeted increasingly at collaborative environments and operators are called upon to perform in multiple roles. In this paper we describe an approach to Crew Intent Inference that relies on models representing the goals and actions of individual operators and of the overall team. We focus on the potential for intent inference to enhance coordination, and we present example scenarios highlighting the utility of intent inference in a multi-operator domain.
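To make the modeling idea concrete, the following is a minimal sketch of how goal and action models for individual operators might be aggregated into a team-level intent picture. All class names, roles, goals, and the simple action-matching heuristic here are hypothetical illustrations for this abstract, not the authors' actual models.

```python
from dataclasses import dataclass, field

@dataclass
class GoalModel:
    """A candidate goal and the actions expected to achieve it (hypothetical)."""
    name: str
    expected_actions: list

@dataclass
class OperatorModel:
    """Tracks one operator's observed actions against candidate goals."""
    role: str
    goals: list
    observed: list = field(default_factory=list)

    def observe(self, action):
        # Record an action reported by the operator's interface.
        self.observed.append(action)

    def infer_intent(self):
        # Score each goal by the fraction of its expected actions
        # already observed; return the best-matching goal.
        def score(goal):
            hits = sum(1 for a in goal.expected_actions if a in self.observed)
            return hits / len(goal.expected_actions)
        return max(self.goals, key=score)

@dataclass
class CrewModel:
    """Aggregates individual inferences into a team-level intent picture."""
    operators: list

    def team_intent(self):
        return {op.role: op.infer_intent().name for op in self.operators}

# Illustrative use: one operator whose observed action matches the "land" goal.
pilot = OperatorModel("pilot", [
    GoalModel("land", ["lower_gear", "reduce_speed"]),
    GoalModel("climb", ["raise_gear", "increase_thrust"]),
])
pilot.observe("lower_gear")
crew = CrewModel([pilot])
print(crew.team_intent())  # → {'pilot': 'land'}
```

A real system would replace the fraction-of-actions heuristic with a richer plan-recognition mechanism, but the separation shown here, per-operator goal models feeding a team-level aggregate, mirrors the individual/team structure described above.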