Challenges and Methods in Testing the Remote Agent Planner

Ben Smith, Martin S. Feather, and Nicola Muscettola

The Remote Agent Experiment (RAX) on the Deep Space 1 (DS1) mission was the first time that an artificially intelligent agent controlled a NASA spacecraft. One of the key components of the Remote Agent is an on-board planner. Since there was no opportunity for human intervention between plan generation and execution, extensive testing was required to ensure that the planner would not endanger the spacecraft by producing an incorrect plan, or by failing to produce a plan at all. The testing process raised many challenging issues, several of which remain open. The planner and domain model are complex, with billions of possible inputs and outputs. How does one obtain adequate coverage with a reasonable number of test cases? How does one even measure coverage for a planner? How does one determine plan correctness? Other issues arise from developing a planner within a larger, operations-oriented project, such as a limited workforce and changing domain models, interfaces, and requirements. As planning systems are fielded in mission-critical applications, it becomes increasingly important to address these issues. This paper describes the major issues that we encountered while testing the Remote Agent planner, how we addressed them, and which issues remain open.
