A framework for feature selection through boosting

Ahmad Alsahaf*, Nicolai Petkov, Vikram Shenoy, George Azzopardi

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

27 Citations (Scopus)
455 Downloads (Pure)

Abstract

As dimensions of datasets in predictive modelling continue to grow, feature selection becomes increasingly practical. Datasets with complex feature interactions and high levels of redundancy still present a challenge to existing feature selection methods. We propose a novel framework for feature selection that relies on boosting, or sample re-weighting, to select sets of informative features in classification problems. The method builds on the feature rankings derived from fast and scalable tree-boosting models, such as XGBoost. We compare the proposed method to standard feature selection algorithms on 9 benchmark datasets. We show that the proposed approach reaches higher accuracies with fewer features on most of the tested datasets, and that the selected features have lower redundancy.
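The abstract describes an iterative scheme: fit a tree-boosting model, take its feature-importance ranking, add the top-ranked feature to the selected set, then re-weight the samples so that examples misclassified using the selected features gain influence in the next iteration. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' implementation; it uses scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost, and the function name, the doubling re-weight rule, and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch of boosting-driven feature selection, loosely following
# the loop described in the abstract. GradientBoostingClassifier stands in
# for XGBoost; the re-weighting rule here (doubling weights of misclassified
# samples) is an illustrative choice, not the paper's exact scheme.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier


def boosted_feature_selection(X, y, n_select=5, seed=0):
    """Iteratively pick features via boosting-model importance rankings,
    re-weighting samples between iterations."""
    n_samples = X.shape[0]
    weights = np.full(n_samples, 1.0 / n_samples)  # start uniform
    selected = []
    while len(selected) < n_select:
        # Fit a boosting model on the weighted samples and rank features.
        model = GradientBoostingClassifier(random_state=seed)
        model.fit(X, y, sample_weight=weights)
        ranking = np.argsort(model.feature_importances_)[::-1]
        # Add the highest-ranked feature not yet selected.
        for f in ranking:
            if f not in selected:
                selected.append(int(f))
                break
        if len(selected) >= n_select:
            break
        # Re-weight: emphasise samples misclassified using only the
        # currently selected features (the boosting step).
        sub = GradientBoostingClassifier(random_state=seed)
        sub.fit(X[:, selected], y)
        miss = sub.predict(X[:, selected]) != y
        weights = np.where(miss, weights * 2.0, weights)
        weights /= weights.sum()
    return selected


# Toy usage on synthetic data with 5 informative features out of 20.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
print(boosted_feature_selection(X, y, n_select=5))
```

In this sketch, the sample re-weighting is what distinguishes the loop from plain importance-based ranking: each newly selected feature changes which samples are hard, so later iterations favour features that complement, rather than duplicate, the ones already chosen.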
Original language: English
Article number: 115895
Journal: Expert Systems with Applications
Volume: 187
Early online date: 16-Sept-2021
DOIs
Publication status: Published - Jan-2022

Keywords

  • Feature selection
  • Boosting
  • Ensemble learning
  • XGBoost
