Publications Freek Stulp


Face Model Fitting with Generic, Group-specific, and Person-specific Objective Functions
Sylvia Pietzsch, Matthias Wimmer, Freek Stulp, and Bernd Radig. Face Model Fitting with Generic, Group-specific, and Person-specific Objective Functions. In 3rd International Conference on Computer Vision Theory and Applications (VISAPP), January 2008.
Download
[PDF] (1.4 MB)
Abstract
In model-based fitting, the model parameters that best fit the image are determined by searching for the optimum of an objective function. Often, this function is designed manually, based on implicit and domain-dependent knowledge. We acquire more robust objective functions by learning them from annotated images, in which many critical decisions are automated, and the remaining manual steps do not require domain knowledge. Still, the trade-off between generality and accuracy remains. General functions can be applied to a large range of objects, whereas specific functions describe a subset of objects more accurately. Gross et al. have demonstrated this principle by comparing generic to person-specific Active Appearance Models. As it is impossible to learn a person-specific objective function for the entire human population, we automatically partition the training images and then learn partition-specific functions. The number of groups influences the specificity of the learned functions. We automatically determine the optimal partitioning given the number of groups by minimizing the expected fitting error. Our empirical evaluation demonstrates that the group-specific objective functions more accurately describe the images of the corresponding group. The results of this paper are especially relevant to face model tracking, as individual faces will not change throughout an image sequence.
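The core idea of the abstract, partitioning the training images into groups and learning one objective function per group, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: it assumes each training image is summarized by a feature vector with an annotated fitting-error target, uses a simple k-means-style partitioning, and fits a linear least-squares model per group as a stand-in for a learned objective function. All function names and signatures here are hypothetical.

```python
import numpy as np

def partition_and_learn(features, targets, n_groups, n_iter=20, seed=0):
    """Partition training samples into n_groups (k-means-style) and fit
    one linear least-squares model per group as a toy stand-in for a
    group-specific objective function."""
    rng = np.random.default_rng(seed)
    # Initialize group centers from random training samples.
    centers = features[rng.choice(len(features), n_groups, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest center, then recompute centers.
        labels = np.argmin(
            ((features[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for g in range(n_groups):
            if np.any(labels == g):
                centers[g] = features[labels == g].mean(0)
    models = []
    for g in range(n_groups):
        X, y = features[labels == g], targets[labels == g]
        if len(X) == 0:
            # Empty group: fall back to a zero model.
            models.append(np.zeros(features.shape[1] + 1))
            continue
        # Fit a linear model with a bias term (least squares).
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        models.append(w)
    return centers, models

def predict(x, centers, models):
    """Evaluate the group-specific model of the group nearest to x."""
    g = int(np.argmin(((centers - x) ** 2).sum(-1)))
    return float(np.r_[x, 1.0] @ models[g])
```

On data drawn from two well-separated groups that follow different linear relationships, the per-group models recover each relationship far more accurately than a single global fit could, which mirrors the generality-versus-accuracy trade-off described above.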
BibTeX
@InProceedings{pietzsch08face,
  title                    = {Face Model Fitting with Generic, Group-specific, and Person-specific Objective Functions},
  author                   = {Sylvia Pietzsch and Matthias Wimmer and Freek Stulp and Bernd Radig},
  booktitle                = {3rd International Conference on Computer Vision Theory and Applications~(VISAPP)},
  year                     = {2008},
  month                    = {January},
  abstract                 = {In model-based fitting, the model parameters that best fit the image are determined by searching for the optimum of an objective function. Often, this function is designed manually, based on implicit and domain-dependent knowledge. We acquire more robust objective functions by learning them from annotated images, in which many critical decisions are automated, and the remaining manual steps do not require domain knowledge. Still, the trade-off between generality and accuracy remains. General functions can be applied to a large range of objects, whereas specific functions describe a subset of objects more accurately. Gross et al. have demonstrated this principle by comparing generic to person-specific Active Appearance Models. As it is impossible to learn a person-specific objective function for the entire human population, we automatically partition the training images and then learn partition-specific functions. The number of groups influences the specificity of the learned functions. We automatically determine the optimal partitioning given the number of groups by minimizing the expected fitting error. Our empirical evaluation demonstrates that the group-specific objective functions more accurately describe the images of the corresponding group. The results of this paper are especially relevant to face model tracking, as individual faces will not change throughout an image sequence.},
  bib2html_pubtype         = {Refereed Conference Paper},
  bib2html_rescat          = {Learning Objective Functions for Face Model Fitting}
}

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.


Generated by bib2html.pl (written by Patrick Riley) on Mon Jul 20, 2015 21:50:11