Benchmarking 3D pose estimation for face recognition

Pengfei Dou, Yuhang Wu, Shishir K. Shah, Ioannis A. Kakadiaris

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Scopus citations


3D-Model-Aided 2D face recognition (MaFR) has attracted a lot of attention in recent years. By registering a 3D model, the facial textures of the gallery and the probe can be lifted and aligned in a common space, thus alleviating the challenge of pose variations. One obstacle to accurate registration is 3D-2D pose estimation, which is easily affected by errors in landmark localization. In this work, we benchmark the performance that state-of-the-art pose estimation algorithms can reach when driven by state-of-the-art automatic landmark localization methods. We generated an application-specific dataset of more than 59,000 synthetic face images with ground-truth camera poses and landmarks, covering 45 poses and six illumination conditions. Our experiments compared four recently proposed pose estimation algorithms using 2D landmarks detected by two automatic methods. Our results highlight one near-real-time landmark detection method and a highly accurate pose estimation algorithm, which could potentially boost 3D-Model-Aided 2D face recognition performance.
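The 3D-2D pose estimation problem benchmarked above can be illustrated with a minimal sketch (not one of the four algorithms evaluated in the paper): given 3D model landmarks and their 2D detections, recover the head rotation under a weak-perspective (scaled-orthographic) camera by fitting an affine camera in least squares and projecting it onto the nearest rotation. The function name and NumPy-based formulation are this sketch's own assumptions.

```python
import numpy as np

def estimate_pose_weak_perspective(X, x):
    """Estimate rotation R (3x3), scale s, and 2D translation t from
    3D landmarks X (n x 3) and 2D detections x (n x 2), assuming a
    weak-perspective camera:  x_i ~= s * R[:2] @ X_i + t.
    (Illustrative sketch; not an algorithm from the paper.)
    """
    n = X.shape[0]
    A = np.hstack([X, np.ones((n, 1))])        # n x 4 design matrix
    M, *_ = np.linalg.lstsq(A, x, rcond=None)  # 4 x 2 affine camera fit
    P = M[:3].T                                # 2 x 3 = s * first two rows of R
    s = 0.5 * (np.linalg.norm(P[0]) + np.linalg.norm(P[1]))
    r1, r2 = P[0] / s, P[1] / s
    r3 = np.cross(r1, r2)                      # complete a right-handed frame
    R = np.vstack([r1, r2, r3])
    # Project onto the nearest true rotation via SVD (handles noisy landmarks)
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:                   # enforce det(R) = +1
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    t = M[3]
    return R, s, t
```

With noise-free landmarks this recovers the pose exactly; with noisy automatic detections, the SVD step re-orthogonalizes the fitted rows, which is precisely where landmark localization error propagates into pose error, the effect the benchmark measures.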

Original language: English (US)
Title of host publication: Proceedings - International Conference on Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781479952083
State: Published - Dec 4 2014
Event: 22nd International Conference on Pattern Recognition, ICPR 2014 - Stockholm, Sweden
Duration: Aug 24 2014 - Aug 28 2014

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651


Conference: 22nd International Conference on Pattern Recognition, ICPR 2014

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition


