TY - JOUR
T1 - 3D/4D facial expression analysis
T2 - An advanced annotated face model approach
AU - Fang, Tianhong
AU - Zhao, Xi
AU - Ocegueda, Omar
AU - Shah, Shishir K.
AU - Kakadiaris, Ioannis A.
N1 - Funding Information:
This research was funded in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory (ARL) and by the University of Houston (UH) Eckhard Pfeiffer Endowment Fund. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, the U.S. Government, or UH.
PY - 2012/10
Y1 - 2012/10
N2 - Facial expression analysis has interested many researchers in the past decade due to its potential applications in various fields such as human-computer interaction, psychological studies, and facial animation. Three-dimensional facial data has been proven to be insensitive to illumination conditions and head pose, and has hence gathered attention in recent years. In this paper, we focus on discrete expression classification using 3D data from the human face. The paper is divided into two parts. In the first part, we present improvements to the fitting of the Annotated Face Model (AFM) so that a dense point correspondence can be found, in terms of both position and semantics, among static 3D face scans or frames in 3D face sequences. Then, an expression recognition framework for static 3D images is presented. It is based on a Point Distribution Model (PDM) that can be built on different features. In the second part of this article, a systematic pipeline that operates on dynamic 3D sequences (4D datasets or 3D videos) is proposed, and alternative modules are investigated as a comparative study. We evaluated both the 3D and 4D Facial Expression Recognition pipelines on two publicly available facial expression databases and obtained promising results.
KW - 3D face models
KW - 4D face videos
KW - Expression recognition
KW - Mesh registration
UR - http://www.scopus.com/inward/record.url?scp=84866734270&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84866734270&partnerID=8YFLogxK
U2 - 10.1016/j.imavis.2012.02.004
DO - 10.1016/j.imavis.2012.02.004
M3 - Article
AN - SCOPUS:84866734270
VL - 30
SP - 738
EP - 749
JO - Image and Vision Computing
JF - Image and Vision Computing
SN - 0262-8856
IS - 10
ER -