FreezeOut: Accelerate Training by Progressively Freezing Layers

Andrew Brock, Theodore Lim, James Millar Ritchie, Nicholas J. Weston

Research output: Contribution to conference › Paper › peer-review


The early layers of a deep neural net have the fewest parameters but account for the most computation. In this extended abstract, we propose training the hidden layers only for a set portion of the training run, freezing them out one by one and excluding them from the backward pass. We empirically demonstrate that FreezeOut yields savings of up to 20% wall-clock time during training, with a 3% loss in accuracy, for DenseNets on CIFAR.
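The schedule described above can be sketched in a few lines. In the paper's setup, each layer is assigned a fraction of training after which it freezes, and its learning rate follows a cosine curve annealed to zero at that freeze point (after which the layer is excluded from the backward pass). The sketch below assumes the linear-spacing variant, where the first layer freezes at a chosen fraction `t0` and the last layer trains to the end; the function names are illustrative, not from the paper's code.

```python
import math

def freeze_fractions(num_layers, t0=0.5):
    """Fraction of the training run at which each layer freezes.

    The earliest layer freezes at t0; later layers freeze at linearly
    spaced points up to 1.0 (the final layer trains for the whole run).
    """
    if num_layers == 1:
        return [1.0]
    return [t0 + (1.0 - t0) * i / (num_layers - 1) for i in range(num_layers)]

def freezeout_lr(t, t_i, lr0=0.1):
    """Cosine-annealed learning rate for one layer.

    t    -- current fraction of training completed (0..1)
    t_i  -- fraction at which this layer freezes
    lr0  -- initial learning rate

    The rate anneals smoothly to zero at t_i; after that the layer is
    frozen (rate 0, no gradients computed for it).
    """
    if t >= t_i:
        return 0.0
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * t / t_i))
```

In a training loop, a layer whose rate has reached zero would also have its parameters marked as not requiring gradients, which is where the backward-pass savings come from.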
Original language: English
Number of pages: 7
Publication status: Published - 8 Dec 2017
Event: NIPS 2017 Workshop on Optimization: 10th NIPS Workshop on Optimization for Machine Learning - Long Beach, United States
Duration: 8 Dec 2017 → …
Conference number: 10


Workshop: NIPS 2017 Workshop on Optimization
Abbreviated title: NIPS
Country/Territory: United States
City: Long Beach
Period: 8/12/17 → …


Keywords

  • Machine learning
  • Neural network

ASJC Scopus subject areas

  • Artificial Intelligence


