TY - GEN
T1 - Adaptive privacy preserving deep learning algorithms for medical data
AU - Zhang, Xinyue
AU - Ding, Jiahao
AU - Wu, Maoqiang
AU - Wong, Stephen T.C.
AU - Van Nguyen, Hien
AU - Pan, Miao
N1 - Funding Information:
The work of Xinyue Zhang, Jiahao Ding and Miao Pan is partly supported by the National Science Foundation (CNS-2029569). The work of Stephen Wong is partly supported by T.T. and W.F. Chao Foundation and John S. Dunn Research Foundation. The work of Hien Van Nguyen is partly supported by the National Science Foundation (1910973).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/1
Y1 - 2021/1
AB - Deep learning holds great promise for revolutionizing healthcare and medicine. Unfortunately, various inference attack models have demonstrated that deep learning puts sensitive patient information at risk. The high capacity of deep neural networks is the main reason behind this privacy loss: patient information in the training data can be unintentionally memorized by a deep network, and adversarial parties can extract that information given the ability to access or query the network. In this paper, we propose a novel privacy-preserving mechanism for training deep neural networks. Our approach adds decaying Gaussian noise to the gradients at every training iteration, in contrast to the mainstream approach adopted by Google's TensorFlow Privacy, which employs the same noise scale at each step of the whole training process. Compared to existing methods, our approach provides an explicit closed-form mathematical expression for approximately estimating the privacy loss. It is easy to compute and is useful when users need to choose a proper training time, noise scale, and sampling ratio during the planning phase. We provide extensive experimental results on a real-world medical dataset (chest radiographs from the CheXpert dataset) to validate the effectiveness of the proposed approach. The proposed differential-privacy-based deep learning model achieves significantly higher classification accuracy than existing methods under the same privacy budget.
UR - http://www.scopus.com/inward/record.url?scp=85105428361&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105428361&partnerID=8YFLogxK
U2 - 10.1109/WACV48630.2021.00121
DO - 10.1109/WACV48630.2021.00121
M3 - Conference contribution
AN - SCOPUS:85105428361
T3 - Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021
SP - 1168
EP - 1177
BT - Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021
Y2 - 5 January 2021 through 9 January 2021
ER -