Ajou University repository

3D head pose estimation through facial features and deep convolutional neural networks
  • Khan, Khalil ;
  • Ali, Jehad ;
  • Ahmad, Kashif ;
  • Gul, Asma ;
  • Sarwar, Ghulam ;
  • Khan, Sahib ;
  • Ta, Qui Thanh Hoai ;
  • Chung, Tae Sun ;
  • Attique, Muhammad
Citations (SCOPUS)
9

Publication Year
2020-01-01
Journal
Computers, Materials and Continua
Publisher
Tech Science Press
Citation
Computers, Materials and Continua, Vol.66 No.2, pp.1757-1770
Keyword
Face image analysis; Face parsing; Face pose estimation
Mesh Keyword
Boston University; Face image analysis; Face pose estimation; Gray-scale images; Human face image; Large-scale applications; Probabilistic classification method; Probability maps
All Science Classification Codes (ASJC)
Biomaterials; Modeling and Simulation; Mechanics of Materials; Computer Science Applications; Electrical and Electronic Engineering
Abstract
Face image analysis is one of several important cues in computer vision. Over the last five decades, methods for face analysis have received immense attention due to large-scale applications in various face analysis tasks. Face parsing strongly benefits many human face image analysis tasks, including face pose estimation. In this paper we propose a 3D head pose estimation framework built on a prior end-to-end deep face parsing model. We develop an end-to-end face parts segmentation framework based on deep convolutional neural networks (DCNNs). To train the face parts parsing model, we label face images with seven classes: eyes, brows, nose, hair, mouth, skin, and background. We extract features from grayscale images using DCNNs and train a classifier on the extracted features. Using a probabilistic classification method, we produce grayscale probability maps for each dense semantic class. A second stage of DCNNs then extracts features from the grayscale probability maps created during the segmentation phase. We assess the performance of the proposed model on four standard head pose datasets, Pointing'04, Annotated Facial Landmarks in the Wild (AFLW), Boston University (BU), and ICT-3DHP, obtaining superior results compared to previous work.
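The abstract describes a two-stage pipeline: a parsing stage that turns per-pixel class scores into one grayscale probability map per semantic class, and a second stage that extracts features from those maps. The following is a minimal numpy sketch of that data flow, not the authors' implementation: the class list follows the seven labels named in the abstract, while `pooled_features` is a hypothetical stand-in for the second-stage DCNN.

```python
import numpy as np

# The seven semantic classes named in the abstract.
CLASSES = ["eyes", "brows", "nose", "hair", "mouth", "skin", "background"]

def softmax(scores, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(scores - scores.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def probability_maps(pixel_scores):
    """Turn per-pixel class scores of shape (H, W, 7) into one
    grayscale probability map per semantic class (parsing stage)."""
    probs = softmax(pixel_scores)                      # (H, W, 7)
    return {c: probs[..., i] for i, c in enumerate(CLASSES)}

def pooled_features(maps, grid=4):
    """Toy stand-in for the second-stage feature extractor:
    average-pool each probability map on a grid x grid lattice
    and concatenate into one feature vector."""
    feats = []
    for c in CLASSES:
        m = maps[c]
        h, w = m.shape
        hs, ws = h // grid, w // grid
        pooled = m[:hs * grid, :ws * grid].reshape(grid, hs, grid, ws).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)                       # (7 * grid * grid,)

# Example: random scores standing in for the first-stage DCNN output.
rng = np.random.default_rng(0)
scores = rng.normal(size=(64, 64, len(CLASSES)))
maps = probability_maps(scores)
feat = pooled_features(maps)
```

In the paper the pooled feature would instead be produced by a trained DCNN, and a head-pose regressor or classifier would consume it; the sketch only illustrates why the probability maps can be treated as grayscale input images for the second stage.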
ISSN
1546-2226
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/31698
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85097140696&origin=inward
DOI
https://doi.org/10.32604/cmc.2020.013590
Journal URL
https://www.techscience.com/cmc/v66n2/40676
Type
Article
Funding
Funding Statement: This work was partially supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01592) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant (2019R1F1A1058548) and Grant (2020R1G1A1013221).

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Ali, Jehad
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.