Democratic Approximation of Lexicographic Preference Models

Fusun Yaman, Thomas J. Walsh, Michael L. Littman, Marie desJardins

Previous algorithms for learning lexicographic preference models (LPMs) produce a "best guess" LPM that is consistent with the observations. Our approach is more democratic: we do not commit to a single LPM. Instead, we approximate the target using the votes of a collection of consistent LPMs. We present two variations of this method---variable voting and model voting---and empirically show that these democratic algorithms outperform the existing methods. We also introduce an intuitive yet powerful learning bias to prune some of the possible LPMs, incorporate this bias into our algorithms, and demonstrate its effectiveness when data is scarce.
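The model-voting idea can be illustrated with a minimal sketch. This is not the paper's algorithm, only a hypothetical toy version under simplifying assumptions: attributes are binary, 1 is always the preferred value, and an LPM is just an importance ordering of the attributes. We enumerate all orderings consistent with the observed pairwise preferences and predict new comparisons by majority vote over that collection, rather than committing to a single consistent model.

```python
from itertools import permutations

def lpm_prefers(order, x, y):
    """Under an LPM given as an attribute-importance order (binary
    attributes, value 1 assumed preferred), x beats y iff x has
    value 1 on the most important attribute where they differ."""
    for attr in order:
        if x[attr] != y[attr]:
            return x[attr] == 1
    return False  # identical on all attributes: no strict preference

def consistent_lpms(n_attrs, observations):
    """All attribute orders consistent with observations (x preferred to y)."""
    return [order for order in permutations(range(n_attrs))
            if all(lpm_prefers(order, x, y) for x, y in observations)]

def model_vote(models, x, y):
    """Majority vote over the consistent LPMs: predict 'x preferred to y'
    if more models rank x above y than the reverse."""
    votes = sum(1 if lpm_prefers(order, x, y) else -1 for order in models)
    return votes > 0

# Example: 3 binary attributes; one observation says (1,0,0) is preferred
# to (0,1,1), so attribute 0 must be the most important.
obs = [((1, 0, 0), (0, 1, 1))]
models = consistent_lpms(3, obs)  # the two orders starting with attribute 0
```

A real implementation would avoid enumerating all n! orderings; the sketch only shows why voting can be more robust than picking one arbitrary consistent model when few observations are available.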

Subjects: 12. Machine Learning and Discovery

Submitted: May 16, 2008


This page is copyrighted by AAAI. All rights reserved.