Charles Earl and James R. Firby
One drawback of most current reactive systems is their inability to learn from experience. Our research addresses this issue by extending the model of reactive planning embodied in the RAP planning and execution architecture to include learning. We describe how Drescher’s schema learning mechanism, originally designed for unstructured constructivist learning, can be modified to support RAP-like goal-directed reactive behavior. In particular, we have adapted the schema mechanism to pursue explicitly defined goals and have developed a set of macros for specifying complex initial competences. We call the system SEAL, for Situated Execution and Learning. In this paper, we present results that demonstrate SEAL’s ability to execute goals and cope with the dynamics of its environment, using the Truckworld simulation system as a testbed. We also discuss how SEAL learns new causal knowledge and how it might use that knowledge to extend its initial set of competences.