TY - JOUR
T1 - Deep Autoencoders
T2 - 2nd Mathematical and Scientific Machine Learning Conference, MSML 2021
AU - Cosentino, Romain
AU - Balestriero, Randall
AU - Baraniuk, Richard
AU - Aazhang, Behnaam
N1 - Funding Information:
Acknowledgments: A special thanks to Anirvan Segupta and Yanis Barhoun for their insights and discussions. RC and BA are supported by NSF grant SCH-1838873 and NIH grant R01HL144683-CFDA. Both RBs are supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2787, and N00014-20-1-2534; AFOSR grant FA9550-18-1-0478; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
Publisher Copyright:
© 2021 R. Cosentino, R. Balestriero, R. Baraniuk & B. Aazhang.
PY - 2021
Y1 - 2021
N2 - A big mystery in deep learning continues to be the ability of methods to generalize when the number of model parameters is larger than the number of training examples. In this work, we take a step towards a better understanding of the underlying phenomena of Deep Autoencoders (AEs), a mainstream deep learning solution for learning compressed, interpretable, and structured data representations. In particular, we interpret how AEs approximate the data manifold by exploiting their continuous piecewise affine structure. Our reformulation of AEs provides new insights into their mapping and reconstruction guarantees, as well as an interpretation of commonly used regularization techniques. We leverage these findings to derive two new regularizations that enable AEs to capture the inherent symmetry in the data. Our regularizations leverage recent advances in transformation group learning to enable AEs to better approximate the data manifold without explicitly defining the group underlying the manifold. Under the assumption that the symmetry of the data can be explained by a Lie group, we prove that the regularizations ensure the generalization of the corresponding AEs. A range of experimental evaluations demonstrates that our methods outperform other state-of-the-art regularization techniques.
AB - A big mystery in deep learning continues to be the ability of methods to generalize when the number of model parameters is larger than the number of training examples. In this work, we take a step towards a better understanding of the underlying phenomena of Deep Autoencoders (AEs), a mainstream deep learning solution for learning compressed, interpretable, and structured data representations. In particular, we interpret how AEs approximate the data manifold by exploiting their continuous piecewise affine structure. Our reformulation of AEs provides new insights into their mapping and reconstruction guarantees, as well as an interpretation of commonly used regularization techniques. We leverage these findings to derive two new regularizations that enable AEs to capture the inherent symmetry in the data. Our regularizations leverage recent advances in transformation group learning to enable AEs to better approximate the data manifold without explicitly defining the group underlying the manifold. Under the assumption that the symmetry of the data can be explained by a Lie group, we prove that the regularizations ensure the generalization of the corresponding AEs. A range of experimental evaluations demonstrates that our methods outperform other state-of-the-art regularization techniques.
KW - Affine Spline Deep Network
KW - Deep Autoencoders
KW - Deep Network
KW - Generalization
KW - Group Equivariant Network
KW - Higher-order Regularization
KW - Interpolation
KW - Interpretability
KW - Lie Algebra
KW - Lie Group
KW - Orbit
KW - Partitioning
KW - Piecewise Affine Deep Network
KW - Piecewise Linear Deep Network
KW - Regression
UR - http://www.scopus.com/inward/record.url?scp=85130832389&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130832389&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85130832389
SN - 2640-3498
VL - 145
SP - 197
EP - 222
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 16 August 2021 through 19 August 2021
ER -