Methodology for Building Synthetic Datasets with Virtual Humans
Release: release_njk7mbtmszc3tjqsq3gxa6q6fm

by Shubhajit Basak, Hossein Javidnia, Faisal Khan, Rachel McDonnell, Michael Schukat

Released as an article.

2020  

Abstract

Recent advances in deep learning methods have increased the performance of face detection and recognition systems. The accuracy of these models relies on the range of variation provided in the training data. Creating a dataset that represents all variations of real-world faces is not feasible as the control over the quality of the data decreases with the size of the dataset. Repeatability of data is another challenge as it is not possible to exactly recreate 'real-world' acquisition conditions outside of the laboratory. In this work, we explore a framework to synthetically generate facial data to be used as part of a toolchain to generate very large facial datasets with a high degree of control over facial and environmental variations. Such large datasets can be used for improved, targeted training of deep neural networks. In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities, providing full control over image variations such as pose, illumination, and background.
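The paper does not publish its rendering code, but the pipeline the abstract describes — sampling a synthetic identity from a 3D morphable model, then rendering many 2D views of it under controlled pose, illumination, and background variation — can be sketched as below. All names, dimensions, and sampling ranges here are illustrative assumptions, not the authors' actual parameters; a real toolchain would rasterize the deformed mesh with a renderer rather than only logging the conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IDENTITIES = 100   # the paper uses 100 synthetic identities
IMAGES_PER_ID = 25   # hypothetical: number of renders per identity

# Hypothetical linear 3DMM: mean shape plus a shape basis
# (dimensions are illustrative, not those of any specific model)
N_VERTS, N_SHAPE = 5000, 80
mean_shape = np.zeros((N_VERTS, 3))
shape_basis = rng.normal(size=(N_VERTS, 3, N_SHAPE)) * 1e-3

def sample_identity():
    """Draw shape coefficients defining one synthetic identity."""
    return rng.normal(size=N_SHAPE)

def sample_conditions():
    """Controlled per-image variations: pose, illumination, background."""
    return {
        "yaw_deg":           rng.uniform(-90, 90),
        "pitch_deg":         rng.uniform(-30, 30),
        "light_azimuth_deg": rng.uniform(0, 360),
        "background_id":     int(rng.integers(0, 20)),
    }

dataset = []
for ident in range(N_IDENTITIES):
    alpha = sample_identity()
    mesh = mean_shape + shape_basis @ alpha  # (N_VERTS, 3) deformed shape
    for _ in range(IMAGES_PER_ID):
        cond = sample_conditions()
        # A real pipeline would render `mesh` under `cond` to a 2D image here.
        dataset.append({"identity": ident, **cond})

print(len(dataset))  # 100 identities x 25 renders = 2500 labelled samples
```

Because every variation is drawn explicitly, the same dataset can be regenerated exactly from the random seed — the repeatability property the abstract contrasts with real-world acquisition.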

Archived Files and Locations

application/pdf, 650.8 kB (file_622gfqfnwbez3ekalxzd4ejz4i)
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-06-21
Version: v1
Language: en
arXiv: 2006.11757v1
Work Entity: access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 2816202c-eea2-4fa3-a259-4cc3497d9991