C. Baral, L. Floriano, A. Gabaldon, D. Morales, T. Son, R. Watson
One of the agendas behind research in reasoning about actions is to develop autonomous agents (robots) that can act in a dynamic world. The early attempts to use theories of reasoning about actions and planning to formulate a robot control architecture were unsuccessful for several reasons. The early theories based on STRIPS and its extensions allowed only observations about the initial state. A robot control architecture using these theories usually took the form: (i) make observations, (ii) use the action theory to construct a plan to achieve the goal, and (iii) execute the plan. For such an architecture to work, the world must be static so that it does not change during the execution of the plan. This assumption does not hold in a dynamic world, where other agents may change the world and the robot may not have complete information about its environment when it makes the plan. Moreover, planning is a time-consuming activity, and it is usually unwise for the robot to spend a long time constructing a plan, especially when it is supposed to interact with the environment in real time. These limitations led to the development of several robot control architectures that were reactive in nature and were usually based on the paradigm of situated activity, which emphasizes ongoing physical interaction with the environment as the main aspect of designing autonomous agents. These approaches were quite successful, especially in the domain of mobile robots, but most of them distanced themselves from the traditional approach based on theories of actions.
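The three-step architecture criticized above can be sketched as a minimal control loop. This is an illustrative sketch only, not the paper's formalism; all function and variable names (`observe`, `plan`, `execute`, `world`, `goal`) are hypothetical, and the "planner" is a trivial stand-in for a STRIPS-style search:

```python
# Sketch of the classic observe-plan-execute architecture.
# All names here are hypothetical placeholders for illustration.

def observe(world):
    """(i) Return the robot's (possibly incomplete) view of the initial state."""
    return dict(world)  # a real robot would query its sensors instead

def plan(state, goal):
    """(ii) Construct a sequence of actions achieving the goal.
    A STRIPS-style planner would search over action preconditions and
    effects; here we simply emit one 'set' action per unmet goal fluent."""
    return [("set", key, value) for key, value in goal.items()
            if state.get(key) != value]

def execute(world, steps):
    """(iii) Apply each planned action in order. Nothing re-checks the
    world between steps, which is exactly the weakness discussed above:
    if another agent changes the world mid-execution, the plan fails."""
    for _, key, value in steps:
        world[key] = value

world = {"at": "room1", "holding": None}
goal = {"at": "room2"}

state = observe(world)      # (i) make observations
steps = plan(state, goal)   # (ii) construct a plan
execute(world, steps)       # (iii) execute the plan
print(world["at"])          # → room2
```

The loop runs exactly once, with no feedback from execution back to observation; reactive architectures close that loop, interleaving sensing and acting continuously.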