Facial landmark detection in uncontrolled conditions

B. Efraty, C. Huang, S. K. Shah, I. A. Kakadiaris

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Scopus citations

Abstract

Facial landmark detection is a fundamental step for many tasks in computer vision, such as expression recognition and face alignment. In this paper, we focus on the detection of landmarks under realistic scenarios that include pose, illumination, and expression challenges, as well as blur and low-resolution input. In our approach, an n-point shape of point-landmarks is represented as a union of simpler polygonal sub-shapes. The core idea of our method is to find, simultaneously for all sub-shapes, the sequence of deformation parameters that transform each point-landmark into its target landmark location. To accomplish this task, we introduce an agglomerate of fern regressors. To optimize convergence speed and accuracy, we take advantage of search localization using component-landmark detectors, multi-scale analysis, and learning of point-cloud dynamics. Results from extensive experiments on facial images from several challenging, publicly available databases demonstrate that our method (ACFeR) can reliably detect landmarks with accuracy comparable to commercial software and other state-of-the-art methods.
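The abstract describes the regression machinery only at a high level. As a rough illustration of the kind of fern-based shape regression it refers to, the sketch below implements a single random fern that bins training samples by a handful of thresholded feature tests and stores a mean shape increment per bin, plus a cascade loop that applies such ferns sequentially to refine a landmark estimate. Everything here (the FernRegressor class, extract_features, the additive update, and all parameter choices) is an assumption made for illustration, not the authors' ACFeR implementation.

    import numpy as np

    class FernRegressor:
        """A single random fern: F binary threshold tests on scalar features
        select one of 2**F bins; each bin stores an average shape increment.
        (Illustrative sketch only, not the paper's implementation.)"""

        def __init__(self, n_features=5, rng=None):
            self.n_features = n_features
            self.rng = rng or np.random.default_rng(0)
            self.feature_idx = None   # which feature columns the fern tests
            self.thresholds = None    # one threshold per test
            self.bin_outputs = None   # (2**F, shape_dim) mean shape increments

        def _bin_index(self, X):
            # Compare the selected features against thresholds and pack the
            # resulting bits into one integer bin index per sample.
            bits = (X[:, self.feature_idx] > self.thresholds).astype(int)
            return bits @ (1 << np.arange(self.n_features))

        def fit(self, X, Y):
            # X: (n_samples, n_feature_dims) shape-indexed features
            # Y: (n_samples, shape_dim) target shape increments
            n_samples, n_dims = X.shape
            self.feature_idx = self.rng.choice(n_dims, self.n_features, replace=False)
            lo = X[:, self.feature_idx].min(axis=0)
            hi = X[:, self.feature_idx].max(axis=0)
            self.thresholds = self.rng.uniform(lo, hi)
            bins = self._bin_index(X)
            self.bin_outputs = np.zeros((2 ** self.n_features, Y.shape[1]))
            for b in range(2 ** self.n_features):
                members = bins == b
                if members.any():
                    self.bin_outputs[b] = Y[members].mean(axis=0)
            return self

        def predict(self, X):
            return self.bin_outputs[self._bin_index(X)]

    def cascade_predict(ferns, extract_features, image, shape0):
        """Apply a sequence of ferns, each refining the current shape estimate.
        shape0 is assumed to be a flattened (2 * n_points,) landmark vector and
        extract_features a user-supplied function returning a feature vector."""
        shape = shape0.copy()
        for fern in ferns:
            feats = extract_features(image, shape)          # features indexed by the current shape
            shape = shape + fern.predict(feats[None])[0]    # additive shape update
        return shape

    if __name__ == "__main__":
        # Tiny synthetic demo: fit one fern on random data and predict.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 32))    # synthetic shape-indexed features
        Y = rng.normal(size=(200, 10))    # synthetic increments for 5 (x, y) landmarks
        fern = FernRegressor(n_features=5, rng=rng).fit(X, Y)
        print(fern.predict(X[:3]).shape)  # (3, 10)

In a full cascade, many such ferns would be trained stage by stage on the residual between the current and ground-truth shapes; the paper's method additionally constrains the update through its polygonal sub-shape decomposition, component-landmark detectors, and multi-scale analysis, which this sketch omits.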

Original language: English (US)
Title of host publication: 2011 International Joint Conference on Biometrics, IJCB 2011
State: Published - 2011
Event: 2011 International Joint Conference on Biometrics, IJCB 2011 - Washington, DC, United States
Duration: Oct 11, 2011 – Oct 13, 2011

Publication series

Name: 2011 International Joint Conference on Biometrics, IJCB 2011

Conference

Conference: 2011 International Joint Conference on Biometrics, IJCB 2011
Country/Territory: United States
City: Washington, DC
Period: 10/11/11 – 10/13/11

ASJC Scopus subject areas

  • Biotechnology
