Tong Zhang
Neural Computation
A boosting algorithm seeks to empirically minimize a loss function in a greedy fashion. The resulting estimator takes an additive functional form and is built iteratively by applying a base estimator (or learner) to samples that are updated based on the previous iterations. This paper studies the convergence of boosting when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove that boosting's greedy optimization converges to the infimum of the loss function over the linear span. As a byproduct, these results reveal the importance of restricting the step sizes of the greedy search, as is known in practice through the work of Friedman and others.
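The following is a minimal sketch, not the paper's exact procedure, of the kind of greedy additive minimization the abstract describes: at each round a basis function is selected greedily from a dictionary and the fit moves a restricted (shrunken) step in that direction. All function names, the squared-loss choice, and parameter values (step_size, n_rounds, the polynomial dictionary) are illustrative assumptions.

```python
# Illustrative sketch of greedy boosting over the linear span of a basis
# dictionary with a restricted step size; not the paper's algorithm.
import numpy as np

def squared_loss_grad(pred, y):
    """Gradient of the squared loss 0.5 * (pred - y)^2 with respect to pred."""
    return pred - y

def greedy_boost(X, y, basis_funcs, n_rounds=100, step_size=0.1):
    """Greedily build an additive model F(x) = sum_k coefs[k] * g_k(x).

    Each round picks the basis function most correlated with the negative
    gradient of the loss at the current fit, then takes a small, restricted
    step in that direction (the shrinkage emphasized in the abstract).
    """
    pred = np.zeros(len(y))             # current additive fit
    coefs = np.zeros(len(basis_funcs))  # accumulated coefficients
    G = np.column_stack([g(X) for g in basis_funcs])  # basis values on sample
    for _ in range(n_rounds):
        residual = -squared_loss_grad(pred, y)   # negative gradient
        scores = G.T @ residual                  # correlation with each basis
        k = int(np.argmax(np.abs(scores)))       # greedy selection
        denom = G[:, k] @ G[:, k]
        if denom == 0:
            break
        alpha = step_size * (scores[k] / denom)  # line search, then shrink
        coefs[k] += alpha
        pred += alpha * G[:, k]
    return coefs, pred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=200)
    y = np.sin(3 * X) + 0.1 * rng.normal(size=200)
    # A toy dictionary of polynomial basis functions, chosen for illustration.
    basis = [lambda x, d=d: x ** d for d in range(1, 6)]
    coefs, fit = greedy_boost(X, y, basis, n_rounds=200, step_size=0.2)
    print("coefficients:", np.round(coefs, 3))
    print("training MSE:", np.round(np.mean((fit - y) ** 2), 4))
```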