Improving Optimization Bounds Using Machine Learning: Decision Diagrams Meet Deep Reinforcement Learning

  • Quentin Cappart École Polytechnique de Montréal
  • Emmanuel Goutierre École Polytechnique
  • David Bergman University of Connecticut
  • Louis-Martin Rousseau École Polytechnique de Montréal

Abstract

Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In the last decade, decision diagrams (DDs) have brought a new perspective on obtaining upper and lower bounds that can be significantly better than classical bounding mechanisms, such as linear relaxations. It is well known that the quality of the bounds achieved through this flexible bounding method is highly reliant on the ordering of variables chosen for building the diagram, and finding an ordering that optimizes standard metrics is an NP-hard problem. In this paper, we propose an innovative and generic approach based on deep reinforcement learning for obtaining an ordering that tightens the bounds obtained with relaxed and restricted DDs. We apply the approach to both the Maximum Independent Set Problem and the Maximum Cut Problem. Experimental results on synthetic instances show that the deep reinforcement learning approach, by achieving tighter objective function bounds, generally outperforms ordering methods commonly used in the literature when the distribution of instances is known. To the best of the authors' knowledge, this is the first paper to apply machine learning to directly improve relaxation bounds obtained by general-purpose bounding mechanisms for combinatorial optimization problems.
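To illustrate the bounding mechanism the abstract refers to, the sketch below (a hypothetical minimal implementation, not the authors' code) builds a relaxed decision diagram for the Maximum Independent Set Problem. States are sets of still-eligible vertices; whenever a layer grows beyond a width limit, low-value states are merged by set union, which is a valid relaxation, so the final value is an upper bound on the true optimum. The names `relaxed_dd_bound`, `neighbors`, `ordering`, and `max_width` are illustrative choices, and the width-one behavior shows how the bound loosens as the diagram is compressed; the role of the variable `ordering` argument is exactly the degree of freedom the paper's reinforcement learning agent controls.

```python
def relaxed_dd_bound(neighbors, ordering, max_width):
    """Upper bound on the MISP optimum of the graph `neighbors`
    (dict: vertex -> set of adjacent vertices), built layer by layer
    following `ordering`, with layers capped at `max_width` states."""
    # Each layer maps a state (frozenset of eligible vertices) to the
    # best objective value of any path reaching that state.
    layer = {frozenset(neighbors): 0}
    for v in ordering:
        nxt = {}
        for state, val in layer.items():
            # Arc 1: exclude v from the independent set.
            s = state - {v}
            nxt[s] = max(nxt.get(s, -1), val)
            # Arc 2: include v (only allowed while v is still eligible);
            # v and its neighbors become ineligible.
            if v in state:
                s = state - ({v} | neighbors[v])
                nxt[s] = max(nxt.get(s, -1), val + 1)
        # Relaxation step: while the layer is too wide, merge the two
        # lowest-value states by union (over-approximates eligibility).
        while len(nxt) > max_width:
            ordered = sorted(nxt.items(), key=lambda kv: kv[1])
            (s1, v1), (s2, v2) = ordered[0], ordered[1]
            del nxt[s1], nxt[s2]
            merged = s1 | s2
            nxt[merged] = max(nxt.get(merged, -1), max(v1, v2))
        layer = nxt
    return max(layer.values())

# 4-cycle: the optimal independent set has size 2.
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(relaxed_dd_bound(graph, [0, 1, 2, 3], max_width=100))  # exact DD
print(relaxed_dd_bound(graph, [0, 1, 2, 3], max_width=1))    # looser bound
```

With an unbounded width the diagram is exact and returns the optimum (2 on the 4-cycle); squeezing the width to 1 forces merges and the bound degrades, which is why a good variable ordering, the quantity learned in this paper, matters.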

Published
2019-07-17
Section
AAAI Technical Track: Constraint Satisfaction and Optimization