[R] Layer rotation: a surprisingly powerful indicator of generalization in deep networks?
Sharing our latest work presented at the ICML workshop "Identifying and Understanding Deep Learning Phenomena":
*Layer rotation: a surprisingly powerful indicator of generalization in deep networks?* ([arxiv link](https://arxiv.org/abs/1806.01603v2))
We're pretty excited about it: we really believe layer rotation (the metric we study) is somehow related to a fundamental aspect of deep learning, and that it deserves much more investigation. For the moment, our work demonstrates that layer rotation's relation with generalization exhibits a remarkable
* consistency: a rule of thumb that is widely applicable, explaining ***differences of up to 30% test accuracy***,
* simplicity: ***a network-independent optimum w.r.t. generalization***, and
* explanatory power: ***novel insights around widely used techniques*** (weight decay, adaptive gradient methods, learning rate warm-ups, ...).
We also provide preliminary evidence that layer rotations correlate with the degree to which intermediate features are learned during the training procedure.
We also provide tools to monitor and control layer rotation during training, which could greatly reduce the current hyperparameter tuning struggle. Code available! [Here](https://github.com/ispgroupucl/layer-rotation-paper-experiments) and [here](https://github.com/ispgroupucl/layer-rotation-tools).
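For readers who want a feel for the metric itself, here is a minimal NumPy sketch of how layer rotation could be monitored. The function name and shapes are illustrative, not the repo's actual API; see the linked tools for the real implementation:

```python
import numpy as np

def layer_rotation(current_weights, initial_weights):
    """Cosine distance between a layer's flattened weight vector
    and its value at initialization (0 = unchanged direction)."""
    w = np.asarray(current_weights, dtype=np.float64).ravel()
    w0 = np.asarray(initial_weights, dtype=np.float64).ravel()
    cos_sim = np.dot(w, w0) / (np.linalg.norm(w) * np.linalg.norm(w0))
    return 1.0 - cos_sim

# Illustrative usage: a layer's weights at init vs. after some updates.
rng = np.random.default_rng(0)
w0 = rng.standard_normal((64, 32))          # weights at initialization
w = w0 + 0.1 * rng.standard_normal((64, 32))  # weights after training
rotation = layer_rotation(w, w0)            # value in [0, 2]; 0 at init
```

Tracking this quantity per layer over the course of training is what produces the "layer rotation curves" discussed in the paper.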
Looking forward to your feedback!
**Abstract**:
Our work presents extensive empirical evidence that layer rotation, i.e., the evolution across training of the cosine distance between each layer's weight vector and its initialization, constitutes an
/r/MachineLearning
https://redd.it/c89lif