Authors: Yoav Freund, Robert E. Schapire

CiteWeb id: 19960000043

CiteWeb score: 7599

In an earlier paper, we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss," which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's "bagging" method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
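As a rough illustration of the boosting procedure the abstract refers to, the sketch below implements a minimal binary AdaBoost in Python, using decision stumps as the weak learner. This is not the paper's exact AdaBoost.M1 or pseudo-loss formulation; the function names, the choice of scikit-learn stumps, and the round count are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Minimal binary AdaBoost sketch; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # start with a uniform weight distribution
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)   # weak learner: decision stump
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))                 # weighted training error
        if err >= 0.5:                                # no longer better than random: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))  # weight of this classifier
        w *= np.exp(-alpha * y * pred)                # up-weight misclassified examples
        w /= w.sum()                                  # renormalize to a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Final hypothesis: weighted majority vote of the weak classifiers."""
    votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(votes)
```

The key design choice, matching the abstract's premise, is that each round reweights the training examples so the next weak classifier concentrates on the examples the previous ones got wrong; the final prediction combines all rounds by a weighted vote.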

The publication "Experiments with a New Boosting Algorithm" is placed in the Top 10000 of the best publications in CiteWeb. In the category Computer Science it is included in the Top 1000. Additionally, the publication "Experiments with a New Boosting Algorithm" is placed in the Top 100 among scientific works published in 1996.
Links to the full text of the publication: