Learning 3D Human Body Embedding

by Boyi Jiang, Juyong Zhang, Jianfei Cai, Jianmin Zheng

Released as an article.

2019  

Abstract

Although human body shapes vary across identities and poses, they can be embedded into a low-dimensional space due to their similarity in structure. Inspired by recent work on latent representation learning with a deformation-based mesh representation, we propose an autoencoder-like network architecture to learn disentangled shape and pose embeddings specifically for the 3D human body. We also integrate a coarse-to-fine reconstruction pipeline into the disentangling process to improve reconstruction accuracy. Moreover, we construct a large dataset of human body models with consistent topology for training the neural network. Our learned embedding not only achieves superior reconstruction accuracy but also provides great flexibility in creating 3D human bodies via interpolation, bilateral interpolation, and latent-space sampling, which is confirmed by extensive experiments. The constructed dataset and trained model will be made publicly available.
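As an illustration of the kind of disentangled shape/pose autoencoder the abstract describes, a minimal sketch in PyTorch is given below. All layer sizes, latent dimensions, the flat per-mesh feature vector, and the plain MLP encoders/decoders are illustrative assumptions; the paper's actual architecture operates on a deformation-based mesh representation with a coarse-to-fine reconstruction pipeline.

# Minimal sketch of a disentangled shape/pose autoencoder (assumed structure,
# not the authors' implementation).
import torch
import torch.nn as nn

class DisentangledBodyAE(nn.Module):
    def __init__(self, feat_dim: int, shape_dim: int = 50, pose_dim: int = 72):
        super().__init__()
        # Two branches map the same mesh feature vector into separate latent codes.
        self.shape_enc = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, shape_dim))
        self.pose_enc = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, pose_dim))
        # The decoder reconstructs the mesh feature from the concatenated codes.
        self.dec = nn.Sequential(
            nn.Linear(shape_dim + pose_dim, 512), nn.ReLU(), nn.Linear(512, feat_dim))

    def forward(self, x):
        z_shape, z_pose = self.shape_enc(x), self.pose_enc(x)
        recon = self.dec(torch.cat([z_shape, z_pose], dim=-1))
        return recon, z_shape, z_pose

# Usage: encode two bodies and recombine codes, e.g. keep A's shape with B's pose.
model = DisentangledBodyAE(feat_dim=1024)
a, b = torch.randn(1, 1024), torch.randn(1, 1024)
_, za_shape, _ = model(a)
_, _, zb_pose = model(b)
transferred = model.dec(torch.cat([za_shape, zb_pose], dim=-1))

Such a separation of latent codes is what enables the creation operations mentioned in the abstract (interpolation, bilateral interpolation, and latent-space sampling), since shape and pose can be varied independently.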

Archived Files and Locations

application/pdf  8.8 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-05-14
Version   v1
Language   en
arXiv  1905.05622v1