Abstract: In this paper we apply multi-armed bandits (MABs) to accelerate ADABOOST. ADABOOST constructs a strong classifier in a stepwise fashion by selecting simple base classifiers and using their weighted "vote" to determine the final classification. We model this stepwise base classifier selection as a sequential decision problem, and optimize it with MABs. Each arm represents a subset of the base classifier set. The MAB gradually learns the "utility" of the subsets, and selects one of the subsets in each iteration. ADABOOST then searches only this subset instead of optimizing the base classifier over the whole space. The reward is defined as a function of the accuracy of the base classifier. We investigate how the MAB algorithms (UCB, UCT) can be applied in the case of boosted stumps, trees, and products of base classifiers. On benchmark datasets, our bandit-based approach achieves only slightly worse test errors than the standard boosted learners, at a computational cost an order of magnitude smaller than that of standard ADABOOST.
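The scheme described above can be sketched in code. The following is a minimal, illustrative Python implementation (not the paper's actual algorithm or codebase): each UCB1 arm is a subset of feature indices, each boosting round pulls one arm, fits a decision stump restricted to that subset's features, and feeds the stump's weighted accuracy back to the bandit as reward. All names and the reward choice are assumptions for illustration.

```python
import numpy as np

def ucb_adaboost(X, y, subsets, T=50, c=1.0):
    """Sketch of bandit-accelerated AdaBoost (illustrative, not the paper's code).

    Arms = subsets of feature indices. Each round, UCB1 picks an arm and the
    weak learner (a decision stump) is optimized only over that subset.
    Reward = the stump's weighted accuracy, so arms containing useful
    features accumulate higher empirical means and get pulled more often.
    """
    n, _ = X.shape
    w = np.full(n, 1.0 / n)            # AdaBoost weights over examples
    counts = np.zeros(len(subsets))    # times each arm was pulled
    means = np.zeros(len(subsets))     # running mean reward per arm
    H = []                             # ensemble: (alpha, feature, threshold, polarity)

    for t in range(T):
        # UCB1: pull each arm once, then maximize mean + exploration bonus
        if t < len(subsets):
            a = t
        else:
            a = int(np.argmax(means + c * np.sqrt(np.log(t) / counts)))

        # best weighted-accuracy stump restricted to the chosen subset
        best = None
        for f in subsets[a]:
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                    acc = np.sum(w * (pred == y))
                    if best is None or acc > best[0]:
                        best = (acc, f, thr, pol)
        acc, f, thr, pol = best

        # classic AdaBoost update
        eps = max(1.0 - acc, 1e-10)                  # weighted error
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        H.append((alpha, f, thr, pol))

        # bandit update: reward is the stump's weighted accuracy
        counts[a] += 1
        means[a] += (acc - means[a]) / counts[a]
    return H

def predict(H, X):
    """Weighted vote of the selected stumps."""
    s = np.zeros(len(X))
    for alpha, f, thr, pol in H:
        s += alpha * np.where(pol * (X[:, f] - thr) > 0, 1, -1)
    return np.where(s >= 0, 1, -1)
```

The speed-up comes from the inner stump search iterating only over `subsets[a]` rather than all features; the UCB exploration bonus keeps occasionally revisiting under-sampled subsets so a useful one is not missed.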