Interleave Variational Optimization with Monte Carlo Sampling: A Tale of Two Approximate Inference Paradigms

Authors

  • Qi Lou University of California, Irvine
  • Rina Dechter University of California, Irvine
  • Alexander Ihler University of California, Irvine

DOI:

https://doi.org/10.1609/aaai.v33i01.33017900

Abstract

Computing the partition function of a graphical model is a fundamental task in probabilistic inference. Variational bounds and Monte Carlo methods, two important approximate inference paradigms for this task, each have their own strengths on different types of problems, but it is often nontrivial to decide which one to apply to a particular problem instance without significant prior knowledge and a high level of expertise. In this paper, we propose a general framework that interleaves optimization of variational bounds (via message passing) with Monte Carlo sampling. Our adaptive interleaving policy automatically balances the computational effort between the two schemes in an instance-dependent way. This gives our framework the strengths of both schemes, leads to tighter anytime bounds and an unbiased estimate of the partition function, and allows flexible tradeoffs between memory, time, and solution quality. We verify our approach empirically on real-world problems taken from recent UAI inference competitions.
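The core idea of alternating between proposal refinement and unbiased sampling can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a toy two-variable model where the "variational" step is a crude weighted-marginal update of a factored proposal, standing in for message passing, interleaved with importance-sampling steps whose weights yield an unbiased estimate of the partition function Z. All names and the update rule are illustrative assumptions.

```python
import random
import math

random.seed(0)

# Toy unnormalized model f(x1, x2) over x in {0,1}^2 (illustrative values).
def f(x1, x2):
    theta = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 0.5, (1, 1): 4.0}
    return theta[(x1, x2)]

# Exact partition function, for comparison only.
Z_exact = sum(f(a, b) for a in (0, 1) for b in (0, 1))  # 7.5

# Fully factored proposal q(x) = q1(x1) * q2(x2); q[i] = P(x_i = 1).
q = [0.5, 0.5]

weights, samples = [], []
for step in range(2000):
    # --- Monte Carlo step: sample from the current proposal and record
    # the importance weight w = f(x) / q(x); E_q[w] = Z for any valid q.
    x = [int(random.random() < q[i]) for i in range(2)]
    qx = math.prod(q[i] if x[i] else 1 - q[i] for i in range(2))
    weights.append(f(x[0], x[1]) / qx)
    samples.append(x)

    # --- "Optimization" step (every 100 samples): nudge the proposal
    # toward the weighted empirical marginals, a stand-in for refining
    # a variational bound. Each weight was computed under the proposal
    # active at its own sampling time, so the average stays unbiased.
    if (step + 1) % 100 == 0:
        total = sum(weights)
        for i in range(2):
            m = sum(w * s[i] for w, s in zip(weights, samples)) / total
            q[i] = min(max(0.5 * q[i] + 0.5 * m, 0.05), 0.95)

Z_hat = sum(weights) / len(weights)
print(Z_exact, round(Z_hat, 3))
```

A real interleaving policy would decide adaptively, per instance, when to spend effort on optimization versus sampling; the fixed every-100-samples schedule here is purely for illustration.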

Published

2019-07-17

How to Cite

Lou, Q., Dechter, R., & Ihler, A. (2019). Interleave Variational Optimization with Monte Carlo Sampling: A Tale of Two Approximate Inference Paradigms. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7900-7907. https://doi.org/10.1609/aaai.v33i01.33017900

Section

AAAI Technical Track: Reasoning under Uncertainty