TY - JOUR
T1 - FaceEngage
T2 - Robust Estimation of Gameplay Engagement from User-Contributed (YouTube) Videos
AU - Chen, Xu
AU - Niu, Li
AU - Veeraraghavan, Ashok
AU - Sabharwal, Ashutosh
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - Measuring user engagement in interactive tasks can facilitate numerous applications toward optimizing user experience, ranging from eLearning to gaming. However, a significant challenge is the lack of non-contact engagement estimation methods that are robust in unconstrained environments. We present FaceEngage, a non-intrusive engagement estimator leveraging user facial recordings during actual gameplay in naturalistic conditions. Our contributions are three-fold. First, we show the potential of using front-facing videos as training data to build the engagement estimator. We compile the FaceEngage Dataset with over 700 picture-in-picture, realistic, and user-contributed YouTube gaming videos (i.e., with both full-screen game scenes and time-synchronized user facial recordings in subwindows). Second, we develop the FaceEngage system, which captures relevant gamer facial features from front-facing recordings to infer task engagement. We implement two FaceEngage pipelines: an estimator trained on user facial motion features inspired by prior psychological works, and a deep learning-enabled estimator. Lastly, we conduct extensive experiments and conclude: (i) certain user facial motion cues (e.g., blink rates, head movements) are engagement-indicative; (ii) our deep learning-enabled FaceEngage pipeline can automatically extract more informative features, outperforming the facial motion feature-based pipeline; (iii) FaceEngage is robust to various video lengths, users, and game genres, and is interpretable. Despite the challenging nature of realistic videos, FaceEngage attains an accuracy of 83.8 percent and a leave-one-user-out precision of 79.9 percent, both of which surpass our facial motion feature-based model.
AB - Measuring user engagement in interactive tasks can facilitate numerous applications toward optimizing user experience, ranging from eLearning to gaming. However, a significant challenge is the lack of non-contact engagement estimation methods that are robust in unconstrained environments. We present FaceEngage, a non-intrusive engagement estimator leveraging user facial recordings during actual gameplay in naturalistic conditions. Our contributions are three-fold. First, we show the potential of using front-facing videos as training data to build the engagement estimator. We compile the FaceEngage Dataset with over 700 picture-in-picture, realistic, and user-contributed YouTube gaming videos (i.e., with both full-screen game scenes and time-synchronized user facial recordings in subwindows). Second, we develop the FaceEngage system, which captures relevant gamer facial features from front-facing recordings to infer task engagement. We implement two FaceEngage pipelines: an estimator trained on user facial motion features inspired by prior psychological works, and a deep learning-enabled estimator. Lastly, we conduct extensive experiments and conclude: (i) certain user facial motion cues (e.g., blink rates, head movements) are engagement-indicative; (ii) our deep learning-enabled FaceEngage pipeline can automatically extract more informative features, outperforming the facial motion feature-based pipeline; (iii) FaceEngage is robust to various video lengths, users, and game genres, and is interpretable. Despite the challenging nature of realistic videos, FaceEngage attains an accuracy of 83.8 percent and a leave-one-user-out precision of 79.9 percent, both of which surpass our facial motion feature-based model.
KW - deep neural networks
KW - Engagement estimation
KW - video analysis
UR - http://www.scopus.com/inward/record.url?scp=85131665310&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131665310&partnerID=8YFLogxK
U2 - 10.1109/TAFFC.2019.2945014
DO - 10.1109/TAFFC.2019.2945014
M3 - Article
AN - SCOPUS:85131665310
VL - 13
SP - 651
EP - 665
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
SN - 1949-3045
IS - 2
ER -