Learning 3D Human Body Embedding
by
Boyi Jiang, Juyong Zhang, Jianfei Cai, Jianmin Zheng
2019
Abstract
Although human body shapes vary across identities and poses, they can be
embedded into a low-dimensional space owing to their structural similarity.
Inspired by recent work on latent representation learning with a
deformation-based mesh representation, we propose an autoencoder-like network
architecture to learn disentangled shape and pose embeddings specifically for
the 3D human body. We also integrate a coarse-to-fine reconstruction pipeline
into the disentangling process to improve reconstruction accuracy. Moreover,
we construct a large dataset of human body models with consistent topology for
training the neural network. Our learned embedding not only achieves superior
reconstruction accuracy but also provides great flexibility in 3D human body
creation via interpolation, bilateral interpolation, and latent-space sampling,
as confirmed by extensive experiments. The constructed dataset and trained
model will be made publicly available.
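The idea of a disentangled embedding, where one code controls shape and a separate code controls pose, and of creating new bodies by interpolating codes independently, can be illustrated with a minimal sketch. This is not the paper's network: the dimensions, the linear encoder/decoder stand-ins, and the function names are all illustrative assumptions; the point is only how two separate latent codes combine and how "bilateral" interpolation blends shape and pose at different rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a body flattened to a
# D-dim feature vector, a shape code of size S, a pose code of size P.
D, S, P = 64, 8, 8

# Linear stand-ins for the learned encoder/decoder networks.
W_shape = rng.standard_normal((S, D)) * 0.1   # body -> shape code
W_pose = rng.standard_normal((P, D)) * 0.1    # body -> pose code
W_dec = rng.standard_normal((D, S + P)) * 0.1 # codes -> body

def encode(x):
    # Disentangled embedding: two separate codes for one body.
    return W_shape @ x, W_pose @ x

def decode(z_shape, z_pose):
    # Reconstruct a body from the concatenated codes.
    return W_dec @ np.concatenate([z_shape, z_pose])

def interpolate(z_a, z_b, t):
    # Linear interpolation in latent space.
    return (1 - t) * z_a + t * z_b

x_a = rng.standard_normal(D)
x_b = rng.standard_normal(D)
za_s, za_p = encode(x_a)
zb_s, zb_p = encode(x_b)

# Bilateral interpolation: blend shape and pose independently,
# here 30% toward body B in shape but 80% toward it in pose.
new_body = decode(interpolate(za_s, zb_s, 0.3),
                  interpolate(za_p, zb_p, 0.8))
print(new_body.shape)  # (64,)
```

Because the two codes are separate inputs to the decoder, pose can be transferred between identities simply by swapping `z_pose` while keeping `z_shape` fixed, which is the flexibility the abstract refers to.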
Archived Files and Locations
application/pdf 8.8 MB
arxiv.org (repository), web.archive.org (webarchive)
1905.05622v1
Access all versions, variants, and formats of this work (e.g., pre-prints).