Lonnie Chrisman, Reid Simmons
A primary problem facing real-world robots is deciding which sensing actions should be performed at any given time. An agent must be economical in its allocation of sensing when sensing is expensive or when many sensing operations are available. Sensing is rational when the expected utility of the information obtained outweighs the execution cost of the sensing operation itself. This paper outlines an approach to the efficient construction of plans containing explicit sensing operations, with the objective of finding nearly optimal, cost-effective plans with respect to both action and sensing. Scheduling sensing operations, in addition to the usual scheduling of physical actions, potentially causes an enormous increase in the computational complexity of planning. Our approach avoids this pitfall through strict adherence to a static sensing policy. The approach, based upon the Markov Decision Process paradigm, handles a significant amount of uncertainty in the outcomes of actions.
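The rationality criterion stated above — sense only when the expected utility of the information outweighs the sensing cost — can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a hypothetical two-state, two-action example comparing the expected value of information against a sensing cost, with all names and numbers chosen for illustration.

```python
# Illustrative sketch (not the paper's method): sensing is rational when the
# expected value of information (VOI) exceeds the cost of the sensing action.

def expected_utility(belief, utilities):
    """Best achievable expected utility when acting on the current belief."""
    return max(sum(p * u for p, u in zip(belief, action_u))
               for action_u in utilities)

def value_of_information(belief, utilities):
    """Expected gain from learning the true state before choosing an action."""
    informed = sum(p * max(u[i] for u in utilities)
                   for i, p in enumerate(belief))
    return informed - expected_utility(belief, utilities)

def should_sense(belief, utilities, sensing_cost):
    """Apply the criterion: sense iff VOI exceeds the sensing cost."""
    return value_of_information(belief, utilities) > sensing_cost

# utilities[a][s]: payoff of action a when the world is in state s.
utilities = [[10.0, 0.0],   # action 0 pays off only in state 0
             [0.0, 10.0]]   # action 1 pays off only in state 1

# Under maximal uncertainty, VOI = 10 - 5 = 5, so a cost-2 sensor is worth it.
print(should_sense([0.5, 0.5], utilities, sensing_cost=2.0))   # True
# When the agent is already nearly certain, VOI = 0.1, so sensing is wasteful.
print(should_sense([0.99, 0.01], utilities, sensing_cost=2.0)) # False
```

In the full MDP setting the same comparison is folded into the value function over belief states, but the economic trade-off being optimized is the one shown here.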