Douglas A. Reece, Steven A. Shafer
Robots performing complex tasks in rich environments need highly capable perception in order to understand their situation and choose appropriate actions. Robot planning systems have typically assumed that perception was so good that it could refresh the entire world model whenever the planning system needed it, or whenever anything in the world changed. Unfortunately, this assumption is unrealistic in many real-world domains because perception is far too computationally expensive. Robots in these domains cannot use the traditional planner paradigm; instead they need a new system design that integrates reasoning with perception. Our research aims to show how a robot can reason about perception, how task knowledge can be used to select perceptual targets, and how this selection dramatically reduces the computational cost of perception.