Changhe Yuan and Marek J. Druzdzel, University of Pittsburgh
Importance sampling-based inference algorithms have shown excellent performance on reasoning tasks in Bayesian networks. In this paper, we argue that all the improvements in these algorithms stem from the same source: an improvement in the quality of the importance function. We also explain the requirements that a good importance function should satisfy: it should concentrate its mass on the important parts of the target density, and it should possess heavy tails. While the first requirement is the subject of a theorem due to Rubinstein, the second is much less well understood. We attempt to illustrate why heavy tails are desirable by studying the properties of importance sampling and by examining a specific example. The study also yields a theoretical insight into the desirability of heavy tails for importance sampling in the context of Bayesian networks, providing a common theoretical basis for several successful heuristic methods.
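As a minimal illustration of the heavy-tail requirement discussed above, the sketch below estimates an expectation under a standard normal target using a heavy-tailed Student-t proposal. Because the t distribution's tails dominate the normal's, the importance weights p(x)/q(x) remain bounded, which keeps the estimator's variance finite. All function names here are ours for illustration; they are not taken from the paper.

```python
import math
import random

def normal_pdf(x):
    """Target density p: standard normal N(0, 1)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def t_pdf(x, df):
    """Proposal density q: Student-t with `df` degrees of freedom (heavy tails)."""
    c = math.gamma((df + 1) / 2.0) / (math.sqrt(df * math.pi) * math.gamma(df / 2.0))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2.0)

def sample_t(df, rng):
    """Draw from Student-t: standard normal divided by sqrt(chi-square/df)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(df / 2.0, 2.0)  # chi-square(df) as Gamma(df/2, scale=2)
    return z / math.sqrt(chi2 / df)

def importance_estimate(f, n=100_000, df=3, seed=0):
    """Self-normalized importance sampling estimate of E_p[f(X)] for
    target p = N(0, 1) using a heavy-tailed t(df) proposal q.
    The weights w = p(x)/q(x) stay bounded because q's tails dominate p's."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sample_t(df, rng)
        w = normal_pdf(x) / t_pdf(x, df)
        num += w * f(x)
        den += w
    return num / den

# E[X^2] under N(0, 1) is exactly 1; the estimate should land nearby.
estimate = importance_estimate(lambda x: x * x)
```

Reversing the roles (a light-tailed proposal for a heavy-tailed target) would make the weights unbounded in the tails, which is precisely the failure mode that the heavy-tail requirement guards against.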