Learning Large Scale Common Sense Models of Everyday Life

William Pentney, Matthai Philipose, Jeff Bilmes, Henry Kautz

Recent work has shown promise in using large, publicly available, hand-contributed commonsense databases as joint models for inferring human state from day-to-day sensor data. The parameters of these models are mined from the web. We show in this paper that learning these parameters from sensor data (with the mined parameters as priors) can significantly improve the performance of the models. The primary challenge in learning is scale: since the model comprises roughly 50,000 irregularly connected nodes in each time slice, it is intractable either to completely label the observed data manually or to compute the expected likelihood of even a single time slice. We show how to solve the resulting semi-supervised learning problem by combining a variety of conventional approximation techniques with a novel technique for simplifying the model called context-based pruning. We show empirically that the learned model is substantially better at interpreting sensor data, and we provide a detailed analysis of how the various techniques contribute to its performance.
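The abstract does not spell out how context-based pruning works; the following is a minimal, hypothetical Python sketch of the general idea as the abstract describes it, assuming that pruning restricts inference in each ~50,000-node time slice to the neighborhood of nodes made relevant by the current sensor observations. All names here (prune_to_context, max_hops, the toy adjacency) are illustrative assumptions, not the paper's actual algorithm.

    """Hypothetical sketch of context-based pruning (illustrative only)."""
    from collections import deque

    def prune_to_context(adjacency, context_nodes, max_hops=2):
        """Return the set of nodes within `max_hops` of any context node.

        adjacency     : dict mapping node -> iterable of neighbor nodes
        context_nodes : nodes made relevant by the current observations
        max_hops      : radius of the retained neighborhood (assumed knob)
        """
        kept = set(context_nodes)
        frontier = deque((n, 0) for n in context_nodes)
        while frontier:
            node, depth = frontier.popleft()
            if depth == max_hops:
                continue
            for nbr in adjacency.get(node, ()):
                if nbr not in kept:
                    kept.add(nbr)
                    frontier.append((nbr, depth + 1))
        return kept

    # Approximate inference would then run only on the induced subgraph
    # over `kept`, rather than over all ~50,000 nodes in the time slice.
    adjacency = {"uses_stove": ["cooking"], "cooking": ["kitchen", "eating"],
                 "eating": ["dining_room"], "kitchen": []}
    print(prune_to_context(adjacency, {"uses_stove"}, max_hops=2))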

Subjects: 5. Common Sense Reasoning; 12. Machine Learning and Discovery

Submitted: Apr 23, 2007
