FaR-GAN for One-Shot Face Reenactment

by Hanxiang Hao, Sriram Baireddy, Amy R. Reibman, and Edward J. Delp

Released as an article.

2020  

Abstract

Animating a static face image with target facial expressions and movements is important for image editing and movie production. This face reenactment process is challenging due to the complex geometry and movement of human faces. Previous work usually requires a large set of images of the same person to model their appearance. In this paper, we present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input, and then produces a face image of the same source identity with the target expression. The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background. We evaluate our method on the VoxCeleb1 dataset and show that it generates higher-quality face images than the compared methods.
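The abstract describes a conditional generator interface: a single source-identity image plus a target-expression representation go in, and a reenacted face of the same identity comes out. The sketch below illustrates only that interface; the module names, layer sizes, and the choice of a landmark-style image as the expression condition are assumptions for illustration, not the authors' FaR-GAN architecture.

# A minimal sketch of a one-shot reenactment generator, in the spirit of
# the abstract: one source face image plus a target-expression map in, one
# reenacted face image out. All names, layer sizes, and the landmark-map
# conditioning are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ReenactmentGenerator(nn.Module):
    def __init__(self, img_channels: int = 3, cond_channels: int = 3, base: int = 64):
        super().__init__()
        # Jointly encode the source identity image and the target-expression map.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels + cond_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decode to an image of the source identity with the target expression.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, source: torch.Tensor, target_expr: torch.Tensor) -> torch.Tensor:
        # One-shot: a single source image per identity, no per-identity fine-tuning.
        x = torch.cat([source, target_expr], dim=1)
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    g = ReenactmentGenerator()
    src = torch.randn(1, 3, 256, 256)    # source identity image
    expr = torch.randn(1, 3, 256, 256)   # target expression (e.g., a landmark rendering)
    out = g(src, expr)
    print(out.shape)  # torch.Size([1, 3, 256, 256])

Because the generator is conditioned on the expression input rather than trained per identity, the same trained model can, in principle, be applied to any unseen face from a single image, which is what "one-shot" refers to here.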

Archived Files and Locations

application/pdf, 3.0 MB (file_7e77jkv5x5frhfwkbxvqiqupdm)
arxiv.org (repository)
web.archive.org (webarchive)
Type      article
Stage     submitted
Date      2020-05-13
Version   v1
Language  en
arXiv     2005.06402v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: b7351eae-ae51-4400-9264-95d871ca343a