A Sparse Parameter Learning Method for Probabilistic Logic Programs

Last modified: 2014-06-18

#### Abstract

We propose a new parameter learning algorithm for ProbLog, an extension of logic programming that supports probabilistic inference. Our algorithm differs from previous parameter learning algorithms for probabilistic logic programming (PLP) models in that it tries to reduce the number of probabilistic parameters contained in the estimated program. Since the computation required for probabilistic inference with a ProbLog program can be exponential in the number of probabilistic parameters, programs with fewer parameters are preferable. Our algorithm reduces the number of parameters by adding a penalty term to the objective function whose minimization encourages the estimated parameters to take values of either 0 or 1. If a parameter takes the value 0 or 1, we can delete the corresponding probabilistic fact, or treat it as a deterministic fact, thereby removing the parameter from the obtained model. We also show that the optimization problem can be solved efficiently by applying the projected gradient method to a compiled knowledge representation. We confirm experimentally that the proposed algorithm is comparable to the state-of-the-art algorithm while reducing the number of probabilistic parameters contained in the estimated program.
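The core idea sketched in the abstract can be illustrated in a few lines. The following is a minimal, hypothetical sketch (not the paper's actual objective, data, or implementation): a negative log-likelihood over two independent probabilistic facts is augmented with a concave penalty `lam * p * (1 - p)`, which is zero at `p = 0` and `p = 1` and largest at `p = 0.5`, so minimizing the penalized objective pushes parameters toward the endpoints. The optimization uses projected gradient descent, where the projection is simply clipping onto the box `[0, 1]`. The counts and the penalty weight `lam` are made-up toy values.

```python
import numpy as np

def projected_gradient(grad_f, p0, lr=0.02, steps=5000):
    """Projected gradient descent onto the box [0, 1]^n."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        # Gradient step followed by projection (clipping) back into [0, 1]
        p = np.clip(p - lr * grad_f(p), 0.0, 1.0)
    return p

# Toy data (hypothetical): fact 0 observed true in 10/10 trials,
# fact 1 observed true in 6/10 trials.
counts_true = np.array([10.0, 6.0])
counts_false = np.array([0.0, 4.0])
lam = 2.0  # penalty weight encouraging parameters toward {0, 1}

def grad(p):
    eps = 1e-9  # avoids division by zero at the box boundary
    # Gradient of the Bernoulli negative log-likelihood
    nll_grad = -counts_true / (p + eps) + counts_false / (1.0 - p + eps)
    # Gradient of the concave penalty lam * p * (1 - p)
    pen_grad = lam * (1.0 - 2.0 * p)
    return nll_grad + pen_grad

p_hat = projected_gradient(grad, np.array([0.5, 0.5]))
# Fact 0, always observed true, is driven exactly to 1 and could be
# treated as deterministic; fact 1 stays strictly inside (0, 1).
print(p_hat)
```

Because the penalty's gradient vanishes only at `p = 0.5`, parameters whose likelihood already favors an extreme value are pushed all the way to the boundary, where the projection keeps them; this is how a fact with unanimous observations ends up with parameter exactly 1 rather than merely close to it.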
