On the Hardness of Probabilistic Inference Relaxations

Authors

  • Supratik Chakraborty Indian Institute of Technology Bombay
  • Kuldeep S. Meel National University of Singapore
  • Moshe Y. Vardi Rice University

DOI:

https://doi.org/10.1609/aaai.v33i01.33017785

Abstract

A promising approach to probabilistic inference that has attracted recent attention reduces inference to a set of model counting queries. Since both probabilistic inference and model counting are #P-hard, various relaxations are used in practice, in the hope that they allow efficient computation while also providing rigorous approximation guarantees.
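
To see the reduction concretely (a generic illustration in standard notation, not taken verbatim from the paper): if a distribution is encoded as a propositional formula φ whose satisfying assignments are, say, uniformly weighted, then a conditional query reduces to the ratio of two model counts, where #(·) denotes the number of satisfying assignments:

    % Conditional inference as a ratio of two model counts over an encoding \varphi;
    % #(\cdot) counts satisfying assignments (uniform-weight case, for illustration).
    \Pr[\, Q \mid E \,] \;=\; \frac{\#(\varphi \wedge Q \wedge E)}{\#(\varphi \wedge E)}

Each such query is thus answered by two calls to a model counter, which is why the hardness of (relaxed) counting transfers directly to inference.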

In this paper, we show that, contrary to common belief, several relaxations used for model counting and its applications (including probabilistic inference) do not actually lead to computational efficiency in a complexity-theoretic sense. Our arguments proceed by showing the corresponding relaxed notions of counting to be computationally hard. We argue that approximate counting with multiplicative tolerance and probabilistic guarantees of correctness is the only class of relaxations that provably simplifies the problem, given access to an NP-oracle. Finally, we show that for applications that compare probability estimates with a threshold, a new notion of relaxation with a gap between a low and a high threshold can be used. This new relaxation allows efficient decision making in practice, given access to an NP-oracle, while also bounding the approximation error.
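
As a sketch of how a gap-based relaxation enables decisions (our own illustrative code, assuming a hypothetical (ε, δ) estimator estimate_prob rather than any procedure from the paper): if the estimate p̂ satisfies p/(1+ε) ≤ p̂ ≤ (1+ε)·p with probability at least 1−δ, then choosing ε = √(θ_hi/θ_lo) − 1 and comparing p̂ against the geometric mean of the two thresholds classifies every p outside the gap (θ_lo, θ_hi) correctly with probability at least 1−δ:

    import math

    def gap_threshold_decide(estimate_prob, theta_lo, theta_hi, delta):
        """Decide whether an unknown probability p satisfies p >= theta_hi
        ("high") or p <= theta_lo ("low"), given a hypothetical estimator
        estimate_prob(eps, delta) that returns p_hat with
            p / (1 + eps) <= p_hat <= (1 + eps) * p
        with probability >= 1 - delta.  For p strictly inside the gap
        (theta_lo, theta_hi), either answer is acceptable by definition.
        """
        assert 0 < theta_lo < theta_hi <= 1
        # With (1 + eps)^2 = theta_hi / theta_lo, any p >= theta_hi yields
        # p_hat >= sqrt(theta_lo * theta_hi), and any p <= theta_lo yields
        # p_hat <= sqrt(theta_lo * theta_hi); one comparison then suffices.
        eps = math.sqrt(theta_hi / theta_lo) - 1
        p_hat = estimate_prob(eps, delta)
        return "high" if p_hat >= math.sqrt(theta_lo * theta_hi) else "low"

The wider the gap, the larger the tolerance ε the estimator may use, which is what makes the relaxed decision problem cheaper than estimating the probability itself.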

Erratum: This research is supported in part by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2018-005).

Published

2019-07-17

How to Cite

Chakraborty, S., Meel, K. S., & Vardi, M. Y. (2019). On the Hardness of Probabilistic Inference Relaxations. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7785-7792. https://doi.org/10.1609/aaai.v33i01.33017785

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Reasoning under Uncertainty