TY - GEN
T1 - NFT-K: Non-Fungible Tangent Kernels
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
AU - Alemohammad, Sina
AU - Babaei, Hossein
AU - Barberan, C. J.
AU - Liu, Naiming
AU - Luzi, Lorenzo
AU - Mason, Blake
AU - Baraniuk, Richard G.
N1 - Funding Information:
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2787, and N00014-20-1-2534; AFOSR grant FA9550-18-1-0478; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
N2 - Deep neural networks have become essential to numerous applications, such as vision, reinforcement learning, and classification, due to their strong empirical performance. Unfortunately, these networks are difficult to interpret, which limits their applicability in settings where interpretability is important for safety, such as medical imaging. The neural tangent kernel recasts a deep neural network as a kernel machine, which provides some degree of interpretability. To further improve interpretability with respect to classification and the individual layers, we develop a new model as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, as opposed to past work, which represents the entire network via a single neural tangent kernel. We demonstrate the interpretability of this model on two datasets, showing that the multiple-kernel model elucidates the interplay between the layers and the predictions.
AB - Deep neural networks have become essential to numerous applications, such as vision, reinforcement learning, and classification, due to their strong empirical performance. Unfortunately, these networks are difficult to interpret, which limits their applicability in settings where interpretability is important for safety, such as medical imaging. The neural tangent kernel recasts a deep neural network as a kernel machine, which provides some degree of interpretability. To further improve interpretability with respect to classification and the individual layers, we develop a new model as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, as opposed to past work, which represents the entire network via a single neural tangent kernel. We demonstrate the interpretability of this model on two datasets, showing that the multiple-kernel model elucidates the interplay between the layers and the predictions.
KW - Deep neural network
KW - interpretability
KW - multi-layer perceptron
KW - neural tangent kernel
UR - http://www.scopus.com/inward/record.url?scp=85131255873&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131255873&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9747078
DO - 10.1109/ICASSP43922.2022.9747078
M3 - Conference contribution
AN - SCOPUS:85131255873
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 3798
EP - 3802
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 27 May 2022
ER -