A framework for feature selection through boosting

Ahmad Alsahaf*, Nicolai Petkov, Vikram Shenoy, George Azzopardi

*Corresponding author for this work

Research output: Academic, peer reviewed

72 Citations (Scopus)
659 Downloads (Pure)


As the dimensionality of datasets in predictive modelling continues to grow, feature selection becomes increasingly important. Datasets with complex feature interactions and high levels of redundancy still present a challenge to existing feature selection methods. We propose a novel framework for feature selection that relies on boosting, i.e. sample re-weighting, to select sets of informative features in classification problems. The method builds on the feature rankings derived from fast and scalable tree-boosting models, such as XGBoost. We compare the proposed method to standard feature selection algorithms on nine benchmark datasets. We show that the proposed approach reaches higher accuracies with fewer features on most of the tested datasets, and that the selected features have lower redundancy.
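The abstract describes the core idea only at a high level: repeatedly rank features with a tree-boosting model, keep the top-ranked feature, and re-weight samples so that later rounds focus on examples the current feature set handles poorly. The sketch below is an illustrative assumption of how such a loop could look, not the authors' exact algorithm; it uses scikit-learn's `GradientBoostingClassifier` in place of XGBoost, and the function name `boosted_feature_selection` and the exponential re-weighting rule are hypothetical choices.

```python
# Hedged sketch of boosting-driven feature selection: iteratively rank
# features with a tree-boosting model trained on re-weighted samples,
# greedily keep the top-ranked unselected feature, then up-weight the
# samples that the currently selected feature subset misclassifies.
# This is an illustration of the general idea, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def boosted_feature_selection(X, y, n_features=3, random_state=0):
    n_samples = X.shape[0]
    weights = np.full(n_samples, 1.0 / n_samples)  # uniform initial weights
    selected = []
    for _ in range(n_features):
        # fit a boosting model on the re-weighted samples and rank features
        model = GradientBoostingClassifier(random_state=random_state)
        model.fit(X, y, sample_weight=weights)
        ranking = np.argsort(model.feature_importances_)[::-1]
        best = next(f for f in ranking if f not in selected)
        selected.append(int(best))
        # evaluate the selected subset and up-weight misclassified samples
        sub = GradientBoostingClassifier(random_state=random_state)
        sub.fit(X[:, selected], y)
        errors = (sub.predict(X[:, selected]) != y).astype(float)
        weights = weights * np.exp(errors)  # emphasise hard samples
        weights /= weights.sum()            # renormalise to a distribution
    return selected
```

On synthetic data where a single feature determines the label, a greedy loop of this shape would typically pick that feature in an early round, since the boosting model assigns it a dominant importance score.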
Original language: English
Journal: Expert systems with applications
Early online date: 16-Sep-2021
Status: Published - Jan-2022
