Efficient Methods for Lifted Inference with Aggregate Factors

Last modified: 2011-08-04

#### Abstract

Aggregate factors (that is, those based on aggregate functions such as SUM, AVERAGE, and AND) in probabilistic relational models can compactly represent dependencies among a large number of relational random variables. However, propositional inference on a factor aggregating *n* *k*-valued random variables into an *r*-valued result random variable is *O*(*r k*^*n*). Lifted methods can ameliorate this to *O*(*r n*^*k*) in general and *O*(*r k* log *n*) for commutative associative aggregators. In this paper, we propose (a) an exact solution constant in *n* when *k* = 2 for certain aggregate operations such as AND, OR and SUM, and (b) a close approximation for inference with aggregate factors with time complexity constant in *n*. This approximate inference involves an analytical solution for some operations when *k* > 2. The approximation is based on the fact that the typically used aggregate functions can be represented by linear constraints in the standard (*k*−1)-simplex in R^*k*, where *k* is the number of possible values for the random variables. This includes even aggregate functions that are commutative but not associative (e.g., the MODE operator, which chooses the most frequent value). Our algorithm takes polynomial time in *k* (which is only 2 for binary variables) regardless of *r* and *n*, and the error decreases as *n* increases. Therefore, for most applications (in which a close approximation suffices) our algorithm is a much more efficient solution than existing algorithms. We present experimental results supporting these claims. We also present (c) a third contribution that further optimizes aggregations over multiple groups of random variables with distinct distributions.
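As a minimal illustration of contribution (a) (a sketch, not the paper's algorithm): for the SUM aggregate over *n* i.i.d. binary (*k* = 2) variables, the aggregate value follows a closed-form Binomial(*n*, *p*) distribution, so the factor can be evaluated without enumerating the 2^*n* joint assignments that propositional inference would require. The function names below are illustrative only.

```python
# Sketch: SUM of n i.i.d. Bernoulli(p) variables has a closed-form
# Binomial(n, p) distribution, avoiding O(2^n) propositional enumeration.
from math import comb
from itertools import product

def sum_aggregate_closed_form(n, p, s):
    """P(X_1 + ... + X_n = s) via the binomial pmf: O(1) arithmetic terms."""
    return comb(n, s) * p**s * (1 - p)**(n - s)

def sum_aggregate_brute_force(n, p, s):
    """Same probability by summing over all 2^n assignments (O(2^n))."""
    total = 0.0
    for assignment in product((0, 1), repeat=n):
        if sum(assignment) == s:
            weight = 1.0
            for x in assignment:
                weight *= p if x == 1 else 1 - p
            total += weight
    return total
```

Both functions agree for small *n*, but only the closed form remains tractable as *n* grows, mirroring the gap between lifted and propositional inference discussed above.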