GoDP: Globally Optimized Dual Pathway deep network architecture for facial landmark localization in-the-wild

Yuhang Wu, Shishir K. Shah, Ioannis A. Kakadiaris

Research output: Contribution to journal › Article › peer-review

12 Scopus citations


Facial landmark localization is a fundamental module for pose-invariant face recognition. The most common approach to facial landmark detection is cascaded regression, which consists of two steps: feature extraction and facial shape regression. Recent methods employ deep convolutional networks to extract robust features for each step, so the whole system can be regarded as a deep cascaded regression architecture. In this work, instead of employing a deep regression network, a Globally Optimized Dual-Pathway (GoDP) deep architecture is proposed that identifies the target pixels by solving a cascaded pixel-labeling problem, without resorting to high-level inference models or complex stacked architectures. The proposed end-to-end system relies on distance-aware softmax functions and a dual-pathway proposal-refinement architecture. Results show that it outperforms state-of-the-art cascaded regression-based methods on multiple in-the-wild face alignment databases. The model achieves a normalized mean error (NME) of 1.84 on the AFLW database [1], outperforming 3DDFA [2] by 61.8%. Experiments on face identification demonstrate that GoDP, coupled with DPM-headhunter [3], improves the rank-1 identification rate by 44.2% compared to the Dlib [4] toolbox on a challenging database.
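To make the reported metric concrete: normalized mean error (NME) is the average point-to-point distance between predicted and ground-truth landmarks, divided by a reference length (on AFLW this is commonly the face bounding-box size). The sketch below is a minimal illustration with a hypothetical `normalized_mean_error` helper, not code from the paper:

```python
import numpy as np

def normalized_mean_error(pred, gt, norm_factor):
    """Mean Euclidean landmark error divided by a normalization length.

    pred, gt: (N, 2) arrays of landmark coordinates.
    norm_factor: reference length, e.g., face bounding-box size (assumption).
    """
    errors = np.linalg.norm(pred - gt, axis=1)  # per-landmark distance
    return errors.mean() / norm_factor

# Toy example: 3 landmarks, each prediction offset by 1 pixel in x.
gt = np.array([[10.0, 10.0], [20.0, 10.0], [15.0, 20.0]])
pred = gt + np.array([1.0, 0.0])
print(normalized_mean_error(pred, gt, norm_factor=100.0))  # 0.01
```

Lower NME is better; the 61.8% improvement over 3DDFA reported above is a relative reduction of this quantity.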

Original language: English (US)
Pages (from-to): 1-16
Number of pages: 16
Journal: Image and Vision Computing
State: Published - May 2018


Keywords

  • Deep learning
  • Face alignment
  • Face recognition
  • Facial landmark localization

ASJC Scopus subject areas

  • Signal Processing
  • Computer Vision and Pattern Recognition

