TY - GEN
T1 - Interpretable Deep Learning Models for Single Trial Prediction of Balance Loss
AU - Ravindran, Akshay Sujatha
AU - Cestari, Manuel
AU - Malaya, Christopher
AU - John, Isaac
AU - Francisco, Gerard E.
AU - Layne, Charles
AU - Contreras-Vidal, Jose L.
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/11
Y1 - 2020/10/11
N2 - Wearable robotic devices are being designed to assist the elderly population and other patients with locomotion disabilities. However, wearable robotics increases the risk of falling. Neuroimaging studies have provided evidence for the involvement of frontocentral and parietal cortices in postural control, which opens up the possibility of using electroencephalography (EEG)-based decoders for early detection of balance loss. This study investigates the presence of commonly identified components of perturbation-evoked potentials (PEPs) while a person is in an exoskeleton. We also evaluated the feasibility of using single-trial EEG to predict the loss of balance using a convolutional neural network (CNN). Overall, the model achieved a mean 5-fold cross-validation test accuracy of 75.2% across six subjects, with 50% as the chance level. We employed a gradient-weighted class activation mapping (Grad-CAM) based visualization technique to interpret the decisions of the CNN and demonstrated that the network learns from PEP components present in these single trials. The high localization ability of Grad-CAM demonstrated here opens up the possibility of deploying CNNs for ERP/PEP analysis while emphasizing model interpretability.
AB - Wearable robotic devices are being designed to assist the elderly population and other patients with locomotion disabilities. However, wearable robotics increases the risk of falling. Neuroimaging studies have provided evidence for the involvement of frontocentral and parietal cortices in postural control, which opens up the possibility of using electroencephalography (EEG)-based decoders for early detection of balance loss. This study investigates the presence of commonly identified components of perturbation-evoked potentials (PEPs) while a person is in an exoskeleton. We also evaluated the feasibility of using single-trial EEG to predict the loss of balance using a convolutional neural network (CNN). Overall, the model achieved a mean 5-fold cross-validation test accuracy of 75.2% across six subjects, with 50% as the chance level. We employed a gradient-weighted class activation mapping (Grad-CAM) based visualization technique to interpret the decisions of the CNN and demonstrated that the network learns from PEP components present in these single trials. The high localization ability of Grad-CAM demonstrated here opens up the possibility of deploying CNNs for ERP/PEP analysis while emphasizing model interpretability.
UR - http://www.scopus.com/inward/record.url?scp=85098862302&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098862302&partnerID=8YFLogxK
U2 - 10.1109/SMC42975.2020.9283206
DO - 10.1109/SMC42975.2020.9283206
M3 - Conference contribution
AN - SCOPUS:85098862302
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 268
EP - 273
BT - 2020 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2020
Y2 - 11 October 2020 through 14 October 2020
ER -