Michael T. Rosenstein and Paul R. Cohen, University of Massachusetts
Autonomous agents make frequent use of knowledge in the form of categories -- categories of objects, human gestures, web pages, and so on. This paper describes a way for agents to learn such categories for themselves through interaction with the environment. In particular, the learning algorithm transforms raw sensor readings into clusters of time series that have predictive value for the agent. We address several issues related to the use of an uninterpreted sensory apparatus and show specific examples in which a Pioneer 1 mobile robot interacts with objects in a cluttered laboratory setting.
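To make the abstract's core idea concrete -- turning a raw sensor stream into clusters of time series -- the following is a minimal illustrative sketch, not the paper's actual algorithm. All names (`windows`, `kmeans`), the windowing scheme, the use of plain k-means with Euclidean distance, and the synthetic "approach"/"bump" episodes are assumptions introduced here for illustration.

```python
import random

def windows(series, width, step):
    """Slice a 1-D sensor stream into fixed-width overlapping windows."""
    return [series[i:i + width] for i in range(0, len(series) - width + 1, step)]

def dist(a, b):
    """Squared Euclidean distance between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: returns (centroids, cluster label per point)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j, p=p: dist(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, a in zip(points, labels) if a == j]
            if members:  # keep old centroid if the cluster emptied
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Two qualitatively different synthetic sensor episodes, stitched into one
# stream: "approach" (steadily rising readings) and "bump" (flat, then a spike).
approach = [0.1 * t for t in range(50)]
bump = [0.0] * 40 + [5.0] * 10
stream = approach + bump

wins = windows(stream, width=10, step=5)
centroids, labels = kmeans(wins, k=2)
```

Each resulting cluster groups windows with a similar temporal profile; in the paper's setting, such clusters would then be evaluated for their predictive value to the agent rather than taken at face value.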