Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation
by
Haiwen Feng, Timo Bolkart, Joachim Tesch, Michael J. Black, Victoria Abrevaya
2022
Abstract
Virtual facial avatars will play an increasingly important role in immersive
communication, games, and the metaverse, and it is therefore critical that they
be inclusive. This requires accurate recovery of the appearance, represented by
albedo, regardless of age, sex, or ethnicity. While significant progress has
been made on estimating 3D facial geometry, albedo estimation has received less
attention. The task is fundamentally ambiguous because the observed color is a
function of albedo and lighting, both of which are unknown. We find that
current methods are biased towards light skin tones due to (1) strongly biased
priors that prefer lighter pigmentation and (2) algorithmic solutions that
disregard the light/albedo ambiguity. To address this, we propose a new
evaluation dataset (FAIR) and an algorithm (TRUST) to improve albedo estimation
and, hence, fairness. Specifically, we create the first facial albedo
evaluation benchmark where subjects are balanced in terms of skin color, and
measure accuracy using the Individual Typology Angle (ITA) metric. We then
address the light/albedo ambiguity by building on a key observation: the image
of the full scene -- as opposed to a cropped image of the face -- contains
important information about lighting that can be used for disambiguation. TRUST
regresses facial albedo by conditioning both on the face region and a global
illumination signal obtained from the scene image. Our experimental results
show significant improvement compared to state-of-the-art methods on albedo
estimation, both in terms of accuracy and fairness. The evaluation benchmark
and code will be made available for research purposes at
https://trust.is.tue.mpg.de.
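
The light/albedo ambiguity at the heart of the paper can be made concrete with a
toy Lambertian model: the observed pixel value is the product of albedo and
shading, so a darker albedo under brighter light produces exactly the same pixel
as a lighter albedo under dimmer light. A minimal NumPy sketch (the numeric
values are illustrative, not from the paper):

```python
import numpy as np

# Toy Lambertian image formation: observed = albedo * shading, where
# shading = light_intensity * max(0, n . l) for surface normal n and
# light direction l. All values below are illustrative.
def render(albedo, light_intensity, normal, light_dir):
    shading = light_intensity * max(0.0, float(np.dot(normal, light_dir)))
    return albedo * shading

n = np.array([0.0, 0.0, 1.0])   # surface normal facing the camera
l = np.array([0.0, 0.0, 1.0])   # frontal light direction

# Two different (albedo, light) explanations of the same observation:
darker_albedo_bright_light = render(albedo=0.4, light_intensity=1.0, normal=n, light_dir=l)
lighter_albedo_dim_light   = render(albedo=0.8, light_intensity=0.5, normal=n, light_dir=l)

print(darker_albedo_bright_light, lighter_albedo_dim_light)  # both 0.4: identical pixel
```

Since both explanations are consistent with the face crop alone, a face-only
regressor must fall back on its learned prior, which is where the reported bias
toward light skin tones enters; the full scene constrains the lighting and
breaks the tie.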
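The Individual Typology Angle used by the FAIR benchmark is a standard
dermatological skin-tone measure computed in CIELAB color space:
ITA = arctan((L* - 50) / b*) * 180 / pi, with larger angles corresponding to
lighter skin. A sketch of how an estimated albedo map might be scored against
ground truth under this metric (averaging over all pixels is a simplification;
a skin mask would normally restrict the region, and this is not claimed to be
the paper's exact protocol):

```python
import numpy as np
from skimage import color  # pip install scikit-image

def ita_degrees(rgb):
    """Mean Individual Typology Angle of an sRGB image with values in [0, 1].

    ITA = arctan((L* - 50) / b*) * 180 / pi, computed per pixel in CIELAB.
    """
    lab = color.rgb2lab(rgb)                   # H x W x 3, channels L*, a*, b*
    L, b = lab[..., 0], lab[..., 2]
    ita = np.degrees(np.arctan2(L - 50.0, b))  # arctan2 avoids division by zero
    return float(ita.mean())

def ita_error(predicted_albedo, gt_albedo):
    """Absolute ITA difference (degrees) between predicted and GT albedo maps."""
    return abs(ita_degrees(predicted_albedo) - ita_degrees(gt_albedo))
```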
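TRUST's central design choice, as described in the abstract, is to condition
albedo regression on both the cropped face and a global illumination signal
estimated from the full scene image. The following PyTorch sketch shows that
two-stream conditioning in miniature; the module names, feature sizes, and the
choice of a spherical-harmonics light code are assumptions for illustration,
not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class SceneLightEncoder(nn.Module):
    """Maps the full scene image to a global illumination code
    (here 27 = 9 spherical-harmonics coefficients x 3 color channels,
    an assumed parameterization)."""
    def __init__(self, light_dim=27):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, light_dim),
        )

    def forward(self, scene):
        return self.net(scene)

class AlbedoRegressor(nn.Module):
    """Regresses albedo parameters from a face crop, conditioned on the
    scene-level light code so the network need not guess the lighting."""
    def __init__(self, light_dim=27, albedo_dim=50):
        super().__init__()
        self.face_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + light_dim, albedo_dim)

    def forward(self, face_crop, light_code):
        face_feat = self.face_encoder(face_crop)
        return self.head(torch.cat([face_feat, light_code], dim=-1))

# Usage on dummy tensors:
scene = torch.randn(1, 3, 224, 224)            # full scene image
face = torch.randn(1, 3, 224, 224)             # cropped face region
light = SceneLightEncoder()(scene)
albedo_params = AlbedoRegressor()(face, light)  # e.g. albedo model coefficients
```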
Archived Files and Locations
application/pdf, 3.6 MB (arXiv preprint 2205.03962v2)
arxiv.org (repository) | web.archive.org (webarchive)