Comparing the Quality of Highly Realistic Digital Humans in 3DoF and 6DoF: A Volumetric Video Case Study
by
Shishir Subramanyam, Jie Li, Irene Viola, Pablo Cesar
2020
Abstract
Virtual Reality (VR) and Augmented Reality (AR) applications have seen a drastic increase in commercial popularity. Different representations have been used to create 3D reconstructions for AR and VR. Point clouds are one such representation, characterized by their simplicity and versatility, which makes them suitable for real-time applications such as reconstructing humans for social virtual reality. In this study, we evaluate how the visual quality of digital humans, represented using point clouds, is affected by compression distortions. We compare the performance of the upcoming point cloud compression standard against an octree-based anchor codec. Two different VR viewing conditions, enabling 3 and 6 degrees of freedom (3DoF and 6DoF), are tested to understand how interacting in the virtual space affects the perception of quality. To the best of our knowledge, this is the first work performing user quality evaluation of dynamic point clouds in VR; in addition, contributions of the paper include quantitative data and empirical findings. Results highlight how perceived visual quality is affected by the tested content, and how current data sets might not be sufficient to comprehensively evaluate compression solutions. Moreover, shortcomings in how point cloud encoding solutions handle visually lossless compression are discussed.