Automatic Depiction of Spatial Descriptions

Patrick Olivier, Toshiyuki Maeda, Jun-ichi Tsujii

A novel combination of ideas from cognitive linguistics and spatial occupancy models in robotics has led to the WIP (Words Into Pictures) system. WIP automatically generates depictions of natural language descriptions of indoor scenes. A qualitative layer in the conceptual representation of objects underpins a mechanism by which alternative depictions arise for qualitatively distinct interpretations, as often results from deictic/intrinsic reference-frame ambiguity. At the same time, a quantitative layer, in conjunction with a potential field model of the semantics of projective prepositions, captures the inherently fuzzy character of the meaning of natural language spatial predications.
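The abstract's potential field idea can be illustrated with a minimal sketch: a scoring function that rates how well a candidate position satisfies a projective preposition such as "right of" relative to a reference object, with a smooth (fuzzy) fall-off rather than a crisp boundary. The function below, including its name, parameters, and the Gaussian fall-off, is a hypothetical illustration, not the field actually used in WIP.

```python
import math

def projective_potential(point, ref, direction, sigma_angle=0.7):
    """Score in [0, 1] for how well `point` satisfies a projective
    preposition (e.g. "right of") relative to `ref`, given the reference
    frame's canonical `direction` vector. Higher means a better fit.
    Illustrative sketch only; WIP's actual field model is not given here."""
    dx, dy = point[0] - ref[0], point[1] - ref[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0  # coincident with the reference object: relation undefined
    # Angular deviation between the displacement and the canonical direction.
    cos_dev = (dx * direction[0] + dy * direction[1]) / (dist * math.hypot(*direction))
    angle = math.acos(max(-1.0, min(1.0, cos_dev)))
    # Gaussian fall-off with angular deviation models the fuzzy boundary
    # of the preposition's region of applicability.
    return math.exp(-(angle ** 2) / (2 * sigma_angle ** 2))
```

Deictic versus intrinsic readings could then be handled by evaluating the same field with two different `direction` vectors, one derived from the viewer's frame and one from the reference object's intrinsic orientation, yielding the qualitatively distinct interpretations the abstract describes.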


This page is copyrighted by AAAI. All rights reserved.