Boosting invariance and efficiency in supervised learning

Andrea Vedaldi, Paolo Favaro, Enrico Grisan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we present a novel boosting algorithm for supervised learning that incorporates invariance to data transformations and has high generalization capabilities. While one can incorporate invariance by adding virtual samples to the data (e.g., by jittering), we adopt a much more efficient strategy and work along the lines of vicinal risk minimization and tangent distance methods. As in vicinal risk minimization, we incorporate invariance to data by applying anisotropic smoothing along the directions of invariance. Moreover, as in tangent distance methods, we provide a simple local approximation to such directions, thus obtaining an efficient computational scheme. We also show that it is possible to automatically design optimal weak classifiers by using gradient descent. To increase efficiency at run time, such optimal weak classifiers are projected on a Haar basis. This results in designing strong classifiers that are more computationally efficient than in the case of exhaustive search. For illustration and validation purposes, we demonstrate the novel algorithm both on synthetic and on real data sets that are publicly available. ©2007 IEEE.
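The core idea of smoothing along invariance directions can be illustrated with a toy sketch. The setup below is ours, not from the paper: a 1-D signal, translation as the data transformation, and a linear weak classifier. The tangent of the transformation orbit is approximated by finite differences (as in tangent distance methods), and the classifier response smoothed along that tangent is compared against the expensive virtual-sample (jittering) estimate it replaces.

```python
import numpy as np

# Hypothetical illustration (names and setup are ours, not the paper's):
# approximate translation invariance for a 1-D signal by smoothing a
# linear weak classifier's response along the local tangent direction,
# instead of averaging over many jittered virtual samples.

rng = np.random.default_rng(0)

def translate(x, s):
    """Circularly shift a 1-D signal by a fractional amount s (linear interp.)."""
    n = len(x)
    idx = np.arange(n) - s
    lo = np.floor(idx).astype(int) % n
    hi = (lo + 1) % n
    frac = idx - np.floor(idx)
    return (1 - frac) * x[lo] + frac * x[hi]

x = np.sin(np.linspace(0, 4 * np.pi, 64))   # toy sample
w = rng.standard_normal(64) / 8.0           # toy linear weak classifier

# Tangent of the translation orbit at x, by finite differences
eps = 1e-3
t = (translate(x, eps) - x) / eps

# Anisotropic smoothing of the response along the tangent: for a linear
# score <w, x + s*t> with s ~ N(0, sigma^2), the expectation is just
# <w, x>, since the zero-mean term E[s] * <w, t> vanishes.
sigma = 0.5
smoothed = w @ x

# Virtual-sample (jittering) estimate of the same quantity -- the
# expensive route the smoothing strategy avoids.
shifts = rng.normal(0.0, sigma, size=20000)
virtual = np.mean([w @ (x + s * t) for s in shifts])
```

For this linear classifier the smoothed response has a closed form, so no virtual samples are needed at all; the Monte Carlo estimate `virtual` agrees with `smoothed` up to sampling noise, which is the efficiency argument in miniature.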

Original language: English
Title of host publication: Proceedings of the IEEE International Conference on Computer Vision
DOI: 10.1109/ICCV.2007.4408840
Publication status: Published - 2007
Event: 2007 IEEE 11th International Conference on Computer Vision - Rio de Janeiro, Brazil
Duration: 14 Oct 2007 - 21 Oct 2007

Conference

Conference: 2007 IEEE 11th International Conference on Computer Vision
Abbreviated title: ICCV
Country: Brazil
City: Rio de Janeiro
Period: 14/10/07 - 21/10/07

Cite this

Vedaldi, A., Favaro, P., & Grisan, E. (2007). Boosting invariance and efficiency in supervised learning. In Proceedings of the IEEE International Conference on Computer Vision. https://doi.org/10.1109/ICCV.2007.4408840