Boosting feature selection

D. B. Redpath, K. Lebart

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

It is possible to reduce the error rate of a single classifier using a classifier ensemble. However, any gain in performance is undermined by the increased computation of performing classification several times. Here the AdaboostFS algorithm is proposed, which builds on two popular areas of ensemble research: Adaboost and Ensemble Feature Selection (EFS). The aim of AdaboostFS is to reduce the number of features used by each base classifier and hence the overall computation required by the ensemble. To do this, the algorithm combines a regularised version of Boosting, AdaboostReg [1], with a floating feature search for each base classifier. AdaboostFS is compared using four benchmark data sets to AdaboostAll, which uses all features, and to AdaboostRSM, which uses a random selection of features. Performance is assessed based on error rate, ensemble error and diversity, and the total number of features used for classification. Results show that AdaboostFS achieves a lower error rate and higher diversity than AdaboostAll, and a lower error rate and comparable diversity to AdaboostRSM. However, compared with the other methods, AdaboostFS significantly reduces the number of features required for classification by each base classifier and by the ensemble as a whole. © Springer-Verlag Berlin Heidelberg 2005.
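The abstract describes the algorithm only at a high level. The sketch below illustrates the general scheme of boosting combined with per-round feature selection, under simplifying assumptions: it uses the standard AdaBoost weight update rather than the soft-margin AdaboostReg update cited in the paper, and a plain greedy forward search in place of the full floating feature search. The names forward_select and boost_fs, the decision-stump base learner, and the max_feats cap are illustrative choices for the example, not taken from the paper.

# Illustrative sketch only: not the authors' AdaboostFS implementation.
# Standard AdaBoost reweighting stands in for the soft-margin AdaboostReg
# update, and greedy forward selection stands in for the floating search.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def forward_select(X, y, w, max_feats=3):
    # Greedily add the feature that most reduces the weighted training error.
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_feats:
        best_f, best_err = None, np.inf
        for f in remaining:
            cols = selected + [f]
            clf = DecisionTreeClassifier(max_depth=1).fit(X[:, cols], y, sample_weight=w)
            err = np.average(clf.predict(X[:, cols]) != y, weights=w)
            if err < best_err:
                best_f, best_err = f, err
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

def boost_fs(X, y, rounds=10, max_feats=3):
    # Boosting loop: each base classifier is trained on its own selected
    # feature subset under the current sample weights. Labels y are in {-1, +1}.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []  # list of (alpha, feature subset, classifier)
    for _ in range(rounds):
        feats = forward_select(X, y, w, max_feats)
        clf = DecisionTreeClassifier(max_depth=1).fit(X[:, feats], y, sample_weight=w)
        pred = clf.predict(X[:, feats])
        err = np.average(pred != y, weights=w)
        if err >= 0.5:           # base classifier no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, feats, clf))
    return ensemble

def predict(ensemble, X):
    # Weighted vote of the base classifiers, each restricted to its own subset.
    return np.sign(sum(a * clf.predict(X[:, f]) for a, f, clf in ensemble))

Because every base classifier sees only a small selected subset rather than the full feature set, classification cost per round is reduced, which is the saving the abstract reports for both each base classifier and the ensemble as a whole.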

Original language: English
Title of host publication: Pattern Recognition and Data Mining
Subtitle of host publication: Third International Conference on Advances in Pattern Recognition, ICAPR 2005, Bath, UK, August 22-25, 2005, Proceedings, Part I
Pages: 305-314
Number of pages: 10
Volume: 3686
ISBN (Print): 978-3-540-28757-5
ISBN (Electronic): 978-3-540-28758-2
DOIs: https://doi.org/10.1007/11551188_33
Publication status: Published - 2005
Event: Third International Conference on Advances in Pattern Recognition - Bath, United Kingdom
Duration: 22 Aug 2005 - 25 Aug 2005

Publication series

Name: Lecture Notes in Computer Science
Volume: 3686
ISSN (Print): 0302-9743

Conference

Conference: Third International Conference on Advances in Pattern Recognition
Abbreviated title: ICAPR 2005
Country: United Kingdom
City: Bath
Period: 22/08/05 - 25/08/05

Fingerprint

Feature extraction
Adaptive boosting
Classifiers

Cite this

Redpath, D. B., & Lebart, K. (2005). Boosting feature selection. In Pattern Recognition and Data Mining: Third International Conference on Advances in Pattern Recognition, ICAPR 2005, Bath, UK, August 22-25, 2005, Proceedings, Part I (Vol. 3686, pp. 305-314). (Lecture Notes in Computer Science; Vol. 3686). https://doi.org/10.1007/11551188_33