FreezeOut: Accelerate Training by Progressively Freezing Layers

Andrew Brock, Theodore Lim, James Millar Ritchie, Nicholas J. Weston

Research output: Contribution to conference › Paper

Abstract

The early layers of a deep neural net have the fewest parameters, but take up the most computation. In this extended abstract, we propose to only train the hidden layers for a set portion of the training run, freezing them out one-by-one and excluding them from the backward pass. We empirically demonstrate that FreezeOut yields savings of up to 20% wall-clock time during training, with a 3% loss in accuracy, for DenseNets on CIFAR.
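
As a rough illustration of the freezing mechanism described in the abstract, the sketch below assigns each layer a cutoff point (a fraction of the training run) and switches its parameters to requires_grad=False once that point is passed, so gradients are no longer computed for them. This is a minimal PyTorch sketch, not the authors' implementation (the paper also schedules per-layer learning rates, which is omitted here); the FreezeOutScheduler class, its linearly spaced cutoffs, and the toy model are hypothetical choices made purely for illustration.

    import torch.nn as nn

    class FreezeOutScheduler:
        """Progressively freezes layers once training passes their cutoff."""

        def __init__(self, layers, first_cutoff=0.5, last_cutoff=1.0):
            # layers: modules ordered from input to output; earlier layers
            # receive earlier cutoffs, so they stop training sooner.
            n = len(layers)
            self.layers = list(layers)
            self.cutoffs = [
                first_cutoff + (last_cutoff - first_cutoff) * i / max(n - 1, 1)
                for i in range(n)
            ]
            self.frozen = [False] * n

        def step(self, progress):
            # progress: fraction of the training run completed, in [0, 1].
            for i, layer in enumerate(self.layers):
                if not self.frozen[i] and progress >= self.cutoffs[i]:
                    for p in layer.parameters():
                        p.requires_grad_(False)  # excluded from the backward pass
                    self.frozen[i] = True

    if __name__ == "__main__":
        model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                              nn.Linear(64, 64), nn.ReLU(),
                              nn.Linear(64, 10))
        # Freeze the two earlier Linear layers progressively; the last one
        # trains for the full run.
        scheduler = FreezeOutScheduler([model[0], model[2], model[4]])
        total_iters = 100
        for it in range(total_iters):
            scheduler.step(it / total_iters)
            # forward pass, loss.backward(), and optimizer.step() would go here

Because layers freeze from the input upward, the backward pass can stop progressively earlier in the network, which is where the wall-clock savings come from.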
Original language: English
Number of pages: 7
Publication status: Published - 8 Dec 2017
Event: NIPS 2017 Workshop on Optimization: 10th NIPS Workshop on Optimization for Machine Learning - Long Beach, United States
Duration: 8 Dec 2017 → …
Conference number: 10

Workshop

Workshop: NIPS 2017 Workshop on Optimization
Abbreviated title: NIPS
Country: United States
City: Long Beach
Period: 8/12/17 → …

Keywords

  • Machine learning
  • Neural network

ASJC Scopus subject areas

  • Artificial Intelligence


Cite this

Brock, A., Lim, T., Ritchie, J. M., & Weston, N. J. (2017). FreezeOut: Accelerate Training by Progressively Freezing Layers. Paper presented at NIPS 2017 Workshop on Optimization, Long Beach, United States. http://opt-ml.org/papers/OPT2017_paper_32.pdf