TY - JOUR
T1 - 3D Face Reconstruction from Volumes of Videos Using a MapReduce Framework
AU - Gao, Wanshun
AU - Zhao, Xi
AU - Gao, Zhimin
AU - Zou, Jianhua
AU - Dou, Pengfei
AU - Kakadiaris, Ioannis A.
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 91746111 and Grant 71702143, in part by the Ministry of Education and China Mobile Joint Research Fund Program under Grant MCM20160302, in part by the Shaanxi Provincial Development and Reform Commission under Grant SFG2016789, in part by the Xi'an Science and Technology Bureau under Grant 2017111SF/RK005-(7), and in part by the Fundamental Research Funds for the Central Universities.
Publisher Copyright:
© 2013 IEEE.
PY - 2019
Y1 - 2019
N2 - As video blogging becomes popular with the general public, egocentric videos generate tremendous amounts of video data that capture a large number of interpersonal social events. Retrieving rich social information, such as human identities, emotions, and other interaction cues, from these massive video data poses significant challenges, and few methods have been proposed so far to address the issue of unlabeled data. In this paper, we present a fully automatic system that retrieves both a sparse 3D facial shape and a dense 3D face, from which further face-related information can be predicted during social communication. First, we localize facial landmarks in 2D videos and recover the sparse 3D shape from motion. Second, we use the recovered sparse 3D shape as a prior for estimating the dense 3D face mesh. To handle large volumes of social videos in a scalable manner, we design the proposed system on a Map/Reduce framework. Tested on the FEI and BU-4DFE face datasets, we improve time efficiency by 92% and 73%, respectively, without loss of accuracy.
AB - As video blogging becomes popular with the general public, egocentric videos generate tremendous amounts of video data that capture a large number of interpersonal social events. Retrieving rich social information, such as human identities, emotions, and other interaction cues, from these massive video data poses significant challenges, and few methods have been proposed so far to address the issue of unlabeled data. In this paper, we present a fully automatic system that retrieves both a sparse 3D facial shape and a dense 3D face, from which further face-related information can be predicted during social communication. First, we localize facial landmarks in 2D videos and recover the sparse 3D shape from motion. Second, we use the recovered sparse 3D shape as a prior for estimating the dense 3D face mesh. To handle large volumes of social videos in a scalable manner, we design the proposed system on a Map/Reduce framework. Tested on the FEI and BU-4DFE face datasets, we improve time efficiency by 92% and 73%, respectively, without loss of accuracy.
KW - 3D face reconstruction
KW - cloud computing
KW - facial shape retrieval
KW - map/reduce
UR - http://www.scopus.com/inward/record.url?scp=85077567204&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077567204&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2019.2938671
DO - 10.1109/ACCESS.2019.2938671
M3 - Article
AN - SCOPUS:85077567204
VL - 7
SP - 165559
EP - 165570
JO - IEEE Access
JF - IEEE Access
SN - 2169-3536
M1 - 8821354
ER -