Abductive Visual Perception with Feature Clouds

David Randell, Mark Witkowski

This paper describes a logical approach to embodied perception and reasoning in the context of Cognitive Robotics that uses feature clouds to encode an explicit 3D description of bodies of arbitrary structural complexity. We extend and apply the principles of abductive perception in order to provide robots with an explicit, flexible, and scalable three-dimensional representation of the world for object recognition, localisation and general task-execution planning. We show that feature clouds require neither a complex, logically formulated geometrical description of the modelled domain, nor are they necessarily tied to any particular type of feature detector. Feature clouds provide the means to (i) unify and encode qualitative, quantitative and numerical information about the position and orientation of objects in space; (ii) encode viewpoint- and resolution-dependent information; and (iii) when embedded within a hypothetico-deductive reasoning framework, integrate psychophysical and other domain-independent constraints.
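The paper itself does not include code, but the notion of a feature cloud as described above — a collection of detected features, each carrying position, orientation, qualitative labels, and viewpoint/resolution metadata — can be sketched as a simple data structure. All names and fields below are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch: field names and structure are assumptions,
# not taken from the paper.

@dataclass
class Feature:
    """One detected feature with quantitative and qualitative content."""
    position: Tuple[float, float, float]     # quantitative: 3D location
    orientation: Tuple[float, float, float]  # quantitative: direction vector
    label: str                               # qualitative: detector-assigned tag
    viewpoint: Tuple[float, float, float]    # viewpoint it was observed from
    resolution: float                        # scale/resolution at detection time

@dataclass
class FeatureCloud:
    """A set of features jointly describing one body."""
    features: List[Feature] = field(default_factory=list)

    def add(self, f: Feature) -> None:
        self.features.append(f)

    def centroid(self) -> Tuple[float, float, float]:
        """Mean feature position -- a crude stand-in for localisation."""
        n = len(self.features)
        xs, ys, zs = zip(*(f.position for f in self.features))
        return (sum(xs) / n, sum(ys) / n, sum(zs) / n)
```

Because each feature carries its own viewpoint and resolution, such a structure is not tied to any particular feature detector: any detector that yields positions and labels can populate it.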

Subjects: 19.1 Perception; 3.2 Geometric Or Spatial Reasoning

Submitted: Mar 6, 2006


This page is copyrighted by AAAI. All rights reserved.