Learning Grounded Representations
Papers from the AAAI Spring Symposium
Paul R. Cohen and Tim Oates, Cochairs
Technical Report SS-01-05
92 pp., $30.00
ISBN 978-1-57735-139-9
If one takes the view that situated agents require representations, then one is led to ask how representations are learned and how they acquire meanings. These questions are equally interesting to AI researchers, psychologists, philosophers, linguists, and other cognitive scientists; and, of course, they admit many kinds of answers. We do not wish to limit debate or take a doctrinaire position, except to say that this symposium is about learning representations whose meanings are somehow related to the world in which they are grounded. Among the topics discussed were the following: (1) learning algorithms for robots and simulated agents, and learning in infants, for getting from sensory data to representations; (2) identifying relevant sensory information, both across sensors and over time; (3) appropriate learning biases, or prior structure, both domain specific and domain general; (4) representations that capture the dynamics of interactions with the environment; (5) the acquisition and grounding of ontological distinctions; and (6) learning word meanings, and language learning more generally.