AAAI Publications, Workshops at the Twenty-Seventh AAAI Conference on Artificial Intelligence

Rates for Inductive Learning of Compositional Models
Adrian Barbu, Maria Pavlovskaia, Song Chun Zhu

Last modified: 2013-06-29


Compositional models are widely used in computer vision because they exhibit strong expressive power, generating a combinatorial number of configurations from a small number of components. However, the literature still lacks a theoretical understanding of why compositional models outperform flat representations, despite empirical evidence and strong arguments that compositional models need fewer training examples. In this paper we give some theoretical answers in this direction, focusing on the AND/OR Graph (AOG) models used in recent literature to represent objects, scenes, and events, and make the following contributions. First, we analyze the capacity of the space of AND/OR graphs, obtaining PAC (Probably Approximately Correct) bounds on the number of training examples sufficient to guarantee, with a given certainty, that the learned model attains a given accuracy. Second, we propose an algorithm for supervised learning of AND/OR graphs with theoretical performance guarantees based on the dimensionality and the number of training examples. Finally, we observe that part localization, part noise tolerance, and part sharing lead to a reduction in the number of training examples required.
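The combinatorial expressiveness claimed above can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual model): AND nodes compose all of their children, OR nodes select one alternative, and leaves are terminal parts. Counting complete configurations shows that an AND of k OR nodes, each with m alternatives, yields m^k configurations from only k*m parts.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                           # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)

def num_configurations(node: Node) -> int:
    """Number of distinct complete configurations rooted at `node`."""
    if node.kind == "LEAF":
        return 1
    counts = [num_configurations(c) for c in node.children]
    if node.kind == "AND":              # AND: compose, product over children
        result = 1
        for c in counts:
            result *= c
        return result
    return sum(counts)                  # OR: choose one alternative

# An AND of 4 OR nodes, each offering 5 alternative parts:
root = Node("AND", [Node("OR", [Node("LEAF") for _ in range(5)])
                    for _ in range(4)])
print(num_configurations(root))         # 5**4 = 625 configurations, 20 parts
```

A flat representation would have to enumerate all 625 configurations explicitly, which is the intuition behind the sample-complexity savings the paper quantifies.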
