Static and Dynamic Cues to Male Attractiveness
by Wilma Latuny
2021, pp. 190–197
Abstract
Most studies on facial attractiveness have relied on attractiveness judged from photographs rather than video clips; only a few have combined images and video sequences as stimuli. To determine static and dynamic cues to male attractiveness, we performed behavioural and computational analyses of the Mr. World 2014 contestants. We asked 365 participants to assess the attractiveness of images or video sequences (thin slices) taken from the profile videos of the Mr. World 2014 contestants. Each participant rated attractiveness on a 7-point scale, ranging from very unattractive to very attractive. In addition, we performed computational analyses of the landmark representations of faces in images and videos to determine which types of static and dynamic facial information predict the attractiveness ratings. The behavioural study revealed that: (1) the attractiveness assessments of images and video sequences are highly correlated, and (2) the attractiveness assessment of videos was on average 0.25 points above that of images. The computational study showed (i) that for both images and video sequences, three established measures of attractiveness correlate with attractiveness, and (ii) that mouth movements correlate negatively with attractiveness ratings. The conclusion of the study is that thin slices of dynamic facial expressions contribute to the attractiveness of males in two ways: positively and negatively. The positive contribution is that presenting a male face in a dynamic way leads to a slight increase in attractiveness rating. The negative contribution is that mouth movements correlate negatively with attractiveness ratings.
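The abstract's behavioural findings rest on two simple statistics: the correlation between per-contestant mean ratings from images and from video thin slices, and the mean video-minus-image difference (reported as about 0.25 points on the 7-point scale). A minimal sketch of that computation is below; the rating values are illustrative placeholders, not data from the study.

```python
# Sketch of the behavioural comparison described in the abstract:
# Pearson correlation between image and video attractiveness ratings,
# plus the mean video-image rating difference. All numbers are
# hypothetical placeholders for per-contestant mean ratings (1-7 scale).

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder per-contestant mean ratings (not the study's data).
image_ratings = [3.1, 4.2, 5.0, 2.8, 4.6, 3.9]
video_ratings = [3.4, 4.5, 5.2, 3.0, 4.9, 4.1]

r = pearson(image_ratings, video_ratings)
mean_diff = sum(v - i for v, i in zip(video_ratings, image_ratings)) / len(image_ratings)
```

With these placeholder values the two rating sets are highly correlated and the video ratings sit slightly above the image ratings, mirroring the pattern the study reports.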
Journal article, published 2021-07-18