Minimum Majority Classification and Boosting

Philip M. Long, Genome Institute of Singapore

Motivated by a theoretical analysis of the generalization of boosting, we examine learning algorithms that work by trying to fit the data with a simple majority vote over a small number of hypotheses drawn from a larger collection. We provide experimental evidence that an algorithm based on this principle outputs hypotheses that often generalize nearly as well as those output by boosting, and sometimes better. We also provide experimental evidence for an additional reason that boosting algorithms generalize well: they take advantage of cases in which there are many simple hypotheses with independent errors.
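The benefit of independent errors mentioned above can be made concrete with a standard binomial calculation (this sketch is illustrative and is not taken from the paper): if a majority vote is taken over k independent classifiers, each wrong with probability p, the vote errs only when more than half of them err simultaneously, so the error probability falls rapidly as k grows.

```python
from math import comb

def majority_vote_error(k: int, p: float) -> float:
    """Error probability of a majority vote over k independent
    classifiers, each individually wrong with probability p.

    k is assumed odd so a strict majority always exists; the vote
    errs exactly when at least (k+1)/2 voters err, giving the
    binomial tail probability P[X >= (k+1)/2] for X ~ Bin(k, p).
    """
    assert k % 2 == 1, "use an odd number of voters to avoid ties"
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range((k + 1) // 2, k + 1))

# With p = 0.3, a single classifier errs 30% of the time, but the
# error of the majority vote shrinks as more independent voters
# are added.
for k in (1, 3, 11):
    print(k, majority_vote_error(k, 0.3))
```

For example, eleven independent classifiers that each err 30% of the time yield a majority-vote error below 10%, which suggests why combining many simple hypotheses with (approximately) independent errors can pay off.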

Copyright © AAAI. All rights reserved.