Do you like average faces best? Adjust the sliders and find out! *
If you find the avatars at the center of the feature space more attractive than those at the perimeter, you are not alone. Human beings are known to be highly adept at distinguishing faces [1] and appear to share a preference for average faces [2,3] with particular aspect ratios [4].
Such a preference for faces that are not too long, not too short, not too wide, yet not too narrow suggested to us that online optimization of the attractiveness of a computer-generated avatar (in response to evaluations of its attractiveness by human raters) could be an excellent test of the optimization powers of our “Lock in Feedback” (LiF) algorithm [5].
So we built an experiment that would allow LiF to pick up on people’s preference for average faces within a “distance between the eyes” versus “brow-nose-chin ratio” feature space. To make sure LiF was not simply drifting toward a random position, we ran the experiment twice, starting from two separate locations in attribute space – represented in the figure below by arrows a and b.
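The core idea behind such a lock-in feedback loop can be sketched in a few lines. Note that this is a toy illustration, not the implementation from our paper: the oscillation frequencies, amplitude, learning rate, window size, and the quadratic test reward are all assumed values chosen for the demo.

```python
import math
import random

def lif_maximize(f, x0, y0, a=0.05, wx=0.9, wy=1.3,
                 gamma=0.2, batch=400, epochs=50, noise=0.1, seed=1):
    """Toy 2-D lock-in feedback loop.

    Each coordinate oscillates at its own frequency around the current
    estimate; correlating the noisy reward with each carrier signal
    recovers a gradient estimate, which is then followed uphill. Only a
    handful of running sums are kept, so memory use is constant.
    """
    rng = random.Random(seed)
    x, y = x0, y0
    t = 0
    for _ in range(epochs):
        gx = gy = 0.0                      # O(1) state per window
        for _ in range(batch):             # one integration window
            cx, cy = math.cos(wx * t), math.cos(wy * t)
            reward = f(x + a * cx, y + a * cy) + rng.gauss(0.0, noise)
            gx += reward * cx              # correlate reward with carriers
            gy += reward * cy
            t += 1
        # 2 / (a * batch) rescales each correlation into a slope estimate
        x += gamma * 2.0 * gx / (a * batch)
        y += gamma * 2.0 * gy / (a * batch)
    return x, y
```

With a noisy quadratic reward peaking at the center of the unit square, the loop walks from either starting corner toward (0.5, 0.5) – the same behavior, in miniature, as the two runs a and b.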
Happily, and as shown in the chart below, LiF proved up to the task (for further background information, see our paper here):
The result is all the more impressive if you keep in mind that a streaming algorithm such as LiF has only limited memory available to it. In the current experiment, the difference between neighboring faces was generally as small as that between the faces shown below:
Yet LiF still proved able to pick up on this very, very subtle signal and move steadily towards an ever more average face – despite an enormous amount of interrater noise (the dip in the chart below results from a “shock” to the system on repositioning – demonstrating LiF’s robustness and “lock in” capabilities):
* The form above contains just 10 x 10 avatars, an approximation of the 100 x 100 avatar matrix used in our experiments.
Some additional technical background information
For those interested in repeating our experiment, here are some pointers on how we generated our 100 x 100 matrix.
We started out by generating nine faces (those shown in a previous figure) through FaceGen Modeler (whose “default” face is itself the average of a large set of facial models, and which can be adjusted over a range of anatomically realistic attributes) [6,7]:
We then employed FantaMorph to morph between these nine faces, first generating the left and right perimeter of our matrix, then morphing between each of the perimeters:
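The two-stage scheme above – first fill in the left and right perimeter columns, then morph row-wise between them – can be mimicked with a simple pixel cross-dissolve. FantaMorph performs feature-based warping, which is considerably better; the linear blend below, with three anchor images per perimeter, is only an illustrative stand-in.

```python
import numpy as np

def column(top, mid, bottom, n):
    """Interpolate a perimeter column of n images through three anchors."""
    out = []
    for i in range(n):
        u = i / (n - 1)
        if u <= 0.5:                        # upper half: top -> mid
            w = u / 0.5
            out.append((1 - w) * top + w * mid)
        else:                               # lower half: mid -> bottom
            w = (u - 0.5) / 0.5
            out.append((1 - w) * mid + w * bottom)
    return out

def morph_matrix(left_anchors, right_anchors, n=100):
    """Build an n x n image matrix: first the left and right perimeter
    columns, then a row-wise cross-dissolve between them."""
    left = column(*left_anchors, n)
    right = column(*right_anchors, n)
    return [[(1 - j / (n - 1)) * left[i] + (j / (n - 1)) * right[i]
             for j in range(n)] for i in range(n)]
```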
To complete the setup, we uploaded the 100 x 100 images to our web server and asked each participant in our survey to score the attractiveness of the face currently selected by LiF.
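Serving the right image then comes down to snapping LiF’s continuous point in the unit square to the nearest cell of the pre-rendered grid. A small helper along these lines does the job (the filename scheme here is hypothetical, not the one used on our server):

```python
def avatar_filename(x, y, n=100):
    """Map a continuous point (x, y) in [0, 1] x [0, 1] to the nearest
    pre-rendered avatar in an n x n grid (hypothetical naming scheme)."""
    i = min(n - 1, max(0, round(x * (n - 1))))   # clamp to valid indices
    j = min(n - 1, max(0, round(y * (n - 1))))
    return f"face_{i:03d}_{j:03d}.png"
```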
Additionally, if you would like to use or adapt the “Face Explorer” as shown at the top of the page, you can find its source code here:
[6] “The ‘Shape’ and ‘Color’ tab sliders are linear projections in a “face space” consisting of 50 dimensions of symmetric shape, 30 dimensions of asymmetric shape and 50 dimensions of symmetric color. The age, gender, and racial group sliders are linear regressions on our data set in that space. The caricature and asymmetry sliders are difference magnitudes from the mean for the given age, gender, and racial group. Our data set consisted of 273 high-resolution 3D face scans.” (http://facegen.com/faq.htm)
[7] Said, C. P., & Todorov, A. (2011). A statistical model of facial attractiveness. Psychological Science.