Persistent Assistants: Living and Working with AI
Papers from the AAAI Spring Symposium
Daniel Shapiro, Pauline Berry, John Gersh, and Nathan Schurr, Cochairs
Consider a future in which intelligent agents play a significant role in our personal and professional lives: smart houses will anticipate our actions and needs, while personalized agents will tailor our entertainment to our preferences, purchase goods for us online, monitor our health, and even drive us to the store. At work, agents will assist us in tasks ranging from organizing meetings to ensuring safety in complicated and stressful situations, such as operating nuclear power plants or conducting space missions. These agents might be robots or software processes, and they might act individually or in collections. Despite this breadth, the examples share two unifying features: they call on us to delegate decisions or actions to agents whose behavior will materially affect our interests or well-being, and they require a close partnership between users and agents over an extended period of time in order to get the job done.
What will it take to enable this future? Effective assistants will need significant new capabilities to interact with and understand people, capabilities we do not yet know how to build. Moreover, the attempt to construct such persistent assistants raises several broad questions at the intersection of autonomous systems and human-centered computing: How does the context of persistent assistance shape user-agent interaction? What requirements do particular tasks and user populations impose? How can people and assistants communicate changing intentions, goals, and tasks to each other? How is trust developed and maintained? What is the tradeoff between predictable behavior and adjustable autonomy? How will persistent assistants affect people's social and interpersonal relations? How should mixed teams of many people and many agents interact?