AAAI Publications, The Twenty-Seventh International Flairs Conference

Learning Probabilistic Relational Models Using Non-Negative Matrix Factorization
Anthony Coutant, Philippe Leray, Hoel Le Capitaine

Last modified: 2014-05-03


Probabilistic Relational Models (PRMs) are directed probabilistic graphical models representing a factored joint distribution over a set of random variables for relational datasets. While regular PRMs define probabilistic dependencies between classes' descriptive attributes, an extension called PRM with Reference Uncertainty (PRM-RU) additionally manages link uncertainty between them by adding random variables called selectors. To avoid variables with large domains, each selector is associated with a partition function mapping objects to a set of clusters, and the selector's distribution is defined over that set of clusters. In PRM-RU, the definition of partition functions constrains them to be learned only from the attributes of the concerned entities, so that any pair of individuals with identical attribute values is assigned to the same cluster. This constraint rests on a strong assumption which does not generalize and can lead to an under-use of relationship data during learning. For these reasons, we relax this constraint in this paper and propose a different partition function learning approach based on relationship data clustering. We empirically show that this approach provides better results than attribute-based learning when the relationship topology is independent of the involved entities' attribute values, and that it gives close results whenever the attribute assumption holds.
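The abstract does not give the paper's algorithmic details, but the core idea of clustering objects from relationship data via non-negative matrix factorization can be sketched as follows. This is an illustrative assumption-laden example, not the authors' implementation: it builds a binary object-by-object relationship matrix, factors it with plain multiplicative-update NMF, and derives a partition function by assigning each object to its dominant latent factor. The matrix, rank, and iteration count are all hypothetical choices.

```python
import numpy as np

def nmf_cluster(V, k, n_iter=300, seed=0, eps=1e-9):
    """Cluster the rows of a non-negative relationship matrix V
    into k groups using multiplicative-update NMF (V ~ W @ H),
    then assign each row to its strongest latent factor."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, size=(n, k))
    H = rng.uniform(0.1, 1.0, size=(k, m))
    for _ in range(n_iter):
        # Standard Lee–Seung multiplicative updates for squared error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    # The partition function: each object maps to its dominant cluster.
    return W.argmax(axis=1)

# Toy relationship data: objects 0-2 link to targets 0-2,
# objects 3-5 link to targets 3-5 (two clear link communities).
V = np.zeros((6, 6))
V[:3, :3] = 1.0
V[3:, 3:] = 1.0

labels = nmf_cluster(V, k=2)
print(labels)  # objects within a community share a cluster label
```

The same grouping could not be recovered from entity attributes alone if the attributes were uncorrelated with the link structure, which is the situation where the abstract reports the relationship-based approach outperforming attribute-based partition learning.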


Probabilistic Relational Models; Learning; Clustering; Non-negative Matrix Factorization

Full Text: PDF