The method is compared with AdaBoost and other ensemble methods on a set of 32 problems from the UCI Machine Learning Repository. In terms of test error, it obtains results that are significantly better than both AdaBoost and the random subspace method when decision trees are used as base learners. Furthermore, the method is more robust to class-label noise than AdaBoost. A study using κ-error diagrams shows that the proposed method improves on boosting by producing classifiers that are both more diverse and more accurate. Decomposing the test error into bias and variance terms shows that our method reduces the bias term more effectively than Bagging, and the variance term more effectively than AdaBoost.
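A κ-error diagram plots, for each pair of classifiers in the ensemble, their chance-corrected agreement (Cohen's kappa) against their mean error, so that accurate-and-diverse pairs fall toward the lower left. As a minimal sketch of the agreement statistic behind these diagrams (not the authors' implementation; the function name and interface are illustrative), kappa for two classifiers' predictions on a shared test set can be computed as:

```python
from collections import Counter

def cohen_kappa(preds_a, preds_b):
    """Agreement between two classifiers' predictions, corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected from the two classifiers' label marginals.
    kappa = 1 means identical predictions; kappa near 0 means chance-level
    agreement (i.e., high diversity).
    """
    n = len(preds_a)
    labels = set(preds_a) | set(preds_b)
    # Observed agreement: fraction of examples where both predict the same label.
    p_o = sum(a == b for a, b in zip(preds_a, preds_b)) / n
    # Expected agreement from the marginal label frequencies of each classifier.
    count_a, count_b = Counter(preds_a), Counter(preds_b)
    p_e = sum(count_a[l] * count_b[l] for l in labels) / (n * n)
    if p_e == 1.0:  # degenerate case: both always predict one label
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

For each pair of ensemble members, one point is plotted at (kappa, mean pair error); an ensemble method that shifts this cloud down and to the left is obtaining members that are simultaneously more accurate and more diverse.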