Towards an Embodied and Situated AI

Artur M. Arsenio

Real-time perception through experimental manipulation is developed using the robot arm to facilitate perception, or by exploiting human-robot social interactions (such as with a caregiver), in which the human changes the world in which the robot is situated, thereby enhancing the robot's perceptions. In contrast to standard supervised learning techniques, which rely on the a priori availability of manually segmented training data, the actions of an embodied agent are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. This framework is shown to apply naturally to a broad spectrum of computer vision problems: object segmentation, visual and cross-modal object recognition, object depth extraction and localization from monocular contextual cues, and learning from visual aids such as books. The theory is corroborated by experimental results.
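The core idea, that an agent's own actions (or a caregiver's) generate segmentation labels for free, can be illustrated with a minimal motion-based segmentation sketch. This is an illustrative assumption, not the paper's actual algorithm: the function name `motion_segmentation`, the frame-differencing approach, and the threshold value are all hypothetical choices standing in for the real perceptual machinery.

```python
import numpy as np

def motion_segmentation(frame_before, frame_after, threshold=25):
    """Segment the image region that changed between two frames.

    When the change is caused by the robot's own arm motion (or by a
    caregiver moving an object), the resulting mask can serve as an
    automatically generated training label for the moved object,
    with no manual segmentation required.
    """
    # Signed difference avoids uint8 wrap-around before thresholding.
    diff = np.abs(frame_after.astype(np.int16) - frame_before.astype(np.int16))
    return diff > threshold

# Toy example: a uniform background, then a bright patch that appears
# after an action brings an object into view.
before = np.zeros((8, 8), dtype=np.uint8)
after = before.copy()
after[2:5, 3:6] = 200              # hypothetical "object" revealed by the action
mask = motion_segmentation(before, after)
print(int(mask.sum()))             # number of pixels labeled as the object
```

In a real system the mask would additionally be correlated with the known arm trajectory so that the arm itself is excluded from the object label; here the sketch only shows how acting on the world turns raw frames into labeled training data.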

Copyright © AAAI. All rights reserved.