TY - JOUR

T1 - The geometry of deep networks: Power diagram subdivision

T2 - 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019

AU - Balestriero, Randall

AU - Cosentino, Romain

AU - Aazhang, Behnaam

AU - Baraniuk, Richard G.

N1 - Funding Information:
RB and RGB were supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571 and N00014-17-1-2551; AFOSR grant FA9550-18-1-0478; DARPA grant G001534-7500; and a Vannevar Bush Faculty Fellowship (ONR grant N00014-18-1-2047). RC and BA were supported by NSF grant SCH-1838873 and NIH grant R01HL144683-CFDA.
Publisher Copyright:
© 2019 Neural Information Processing Systems Foundation. All rights reserved.

PY - 2019

Y1 - 2019

AB - We study the geometry of deep (neural) networks (DNs) with piecewise affine and convex nonlinearities. The layers of such DNs have been shown to be max-affine spline operators (MASOs) that partition their input space and apply a region-dependent affine mapping to their input to produce their output. We demonstrate that each MASO layer's input-space partition corresponds to a power diagram (an extension of the classical Voronoi tiling) whose number of regions grows exponentially with the number of units (neurons). We further show that a composition of MASO layers (e.g., the entire DN) produces a progressively subdivided power diagram, and we provide its analytical form. The subdivision process constrains the affine maps on the regions of the power diagram (potentially exponentially many in the number of neurons), greatly reducing their complexity. For classification problems, we obtain a formula for the DN's decision boundary in the input space, plus a measure of its curvature that depends on the DN's architecture, nonlinearities, and weights. Numerous numerical experiments support and extend our theoretical results.

UR - http://www.scopus.com/inward/record.url?scp=85088863048&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85088863048&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85088863048

VL - 32

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

SN - 1049-5258

Y2 - 8 December 2019 through 14 December 2019

ER -