Multilinear Autoencoder for 3D Face Model Learning



Generative models have proved to be useful tools to represent 3D human faces and their statistical variations. With the increase of 3D scan databases available for training, a growing challenge lies in the ability to learn generative face models that effectively encode shape variations with respect to desired attributes, such as identity and expression, given datasets that can be diverse. This paper addresses this challenge by proposing a framework that learns a generative 3D face model using an autoencoder architecture, hence allowing for weakly supervised training. The main contribution is to combine a convolutional neural network-based encoder with a multilinear model-based decoder, thereby taking advantage of both the convolutional network's robustness to corrupted and incomplete data and the multilinear model's capacity to effectively model and decouple shape variations. Given a set of 3D face scans with annotation labels for the desired attributes, e.g., identities and expressions, our method learns an expressive multilinear model that decouples shape changes due to the different factors. Experimental results demonstrate that the proposed method outperforms recent approaches when learning multilinear face models from incomplete training data, particularly in terms of space decoupling, and that it is capable of learning from an order of magnitude more data than previous methods.
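The decoder in this architecture is a multilinear (Tucker-style) face model: a face shape is reconstructed by contracting a learned core tensor with identity and expression coefficient vectors. A minimal sketch of that decoding step, using toy dimensions and a random core tensor as a stand-in for a trained one:

```python
import numpy as np

# Hypothetical toy dimensions: V vertices (3V coordinates),
# identity and expression coefficient sizes.
num_verts, id_dim, expr_dim = 100, 8, 5

# Stand-in for a learned multilinear model: core tensor and mean shape.
rng = np.random.default_rng(0)
core = rng.standard_normal((3 * num_verts, id_dim, expr_dim))
mean_shape = np.zeros(3 * num_verts)

def decode(w_id, w_expr):
    """Reconstruct a face by contracting the core tensor with the
    identity and expression coefficients (mode products)."""
    return mean_shape + np.einsum('vij,i,j->v', core, w_id, w_expr)

# Decoding any (identity, expression) pair yields a 3V-dim vertex vector.
face = decode(np.ones(id_dim) / id_dim, np.ones(expr_dim) / expr_dim)
print(face.shape)  # -> (300,)
```

Because the two coefficient vectors enter the contraction independently, identity and expression variations are decoupled by construction, which is what the autoencoder training exploits.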


@inproceedings{fernandez2018multilinear,
  author    = "Fern{\'a}ndez Abrevaya, V. and Wuhrer, S. and Boyer, E.",
  title     = "Multilinear Autoencoder for 3D Face Model Learning",
  booktitle = "2018 IEEE Winter Conference on Applications of Computer Vision (WACV)",
  year      = "2018"
}


Code for regression and expression transfer using trained models can be found here.
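With a trained multilinear model, expression transfer reduces to recombining coefficient vectors: keep one face's identity coefficients and decode them with another face's expression coefficients. A hedged sketch of that idea, with a random core tensor and made-up coefficients standing in for a trained model and encoder outputs:

```python
import numpy as np

# Toy stand-in for a trained multilinear model (3V x id x expr core tensor).
rng = np.random.default_rng(1)
core = rng.standard_normal((30, 4, 3))

# Hypothetical per-face coefficients, as an encoder would estimate them.
id_a, expr_a = rng.standard_normal(4), rng.standard_normal(3)
id_b, expr_b = rng.standard_normal(4), rng.standard_normal(3)

# Expression transfer: face A's identity decoded with face B's expression.
transferred = np.einsum('vij,i,j->v', core, id_a, expr_b)
print(transferred.shape)  # -> (30,)
```

This is only possible because the model decouples the identity and expression factors; with an entangled latent space, the two coefficient sets could not be swapped independently.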

Code for training the autoencoder can be found here.



The Bu+Bosph model was trained using the BU-3DFE and Bosphorus datasets.
If you use this model in your publication, please cite:

The D3DFACS model was trained using the BU-3DFE, Bosphorus and D3DFACS datasets.
If you use this model in your publication, please cite:

Copyright Notice

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright.