HOnnotate: A method for 3D Annotation of Hand and Object Poses

by Shreyas Hampali, Mahdi Rad, Markus Oberweger, Vincent Lepetit

Released as an article.

2020  

Abstract

We propose a method for annotating images of a hand manipulating an object with the 3D poses of both the hand and the object, together with a dataset created using this method. There is currently a lack of annotated real images for this problem, as estimating the 3D poses is challenging, mostly because of the mutual occlusions between the hand and the object. To tackle this challenge, we capture sequences with one or several RGB-D cameras and jointly optimize the 3D hand and object poses over all frames simultaneously. This method allows us to automatically annotate each frame with accurate estimates of the poses, despite large mutual occlusions. With this method, we created HO-3D, the first markerless dataset of color images with 3D annotations of both hand and object. This dataset currently consists of 80,000 frames, 65 sequences, 10 persons, and 10 objects, and is growing. We also use it to train a deep network to perform RGB-based single-frame hand pose estimation, providing a baseline on our dataset.
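The abstract's key idea is to optimize the hand and object poses jointly over the whole sequence rather than frame by frame, so heavily occluded frames can borrow information from their neighbors. The toy least-squares sketch below illustrates that structure only; the rigid point model, translation-only poses, residual terms, and the lambda_smooth weight are illustrative assumptions, not the authors' actual formulation.

    """Toy sketch of joint multi-frame pose optimization.
    All models, residuals, and weights here are illustrative
    assumptions, not the HOnnotate formulation."""
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    N_FRAMES = 5   # frames optimized jointly
    N_POINTS = 8   # points on a toy rigid model (stand-in for hand/object meshes)

    model = rng.normal(size=(N_POINTS, 3))

    # Ground-truth per-frame translations and noisy "depth camera" observations.
    true_t = np.cumsum(rng.normal(scale=0.1, size=(N_FRAMES, 3)), axis=0)
    observations = np.stack([model + t for t in true_t])
    observations += rng.normal(scale=0.01, size=observations.shape)

    def residuals(x, lambda_smooth=1.0):
        """Data term: transformed model points should match observations
        in every frame. Smoothness term: consecutive frame poses should
        stay close, which is what lets occluded frames borrow
        information from their neighbors."""
        t = x.reshape(N_FRAMES, 3)
        data = (model[None] + t[:, None] - observations).ravel()
        smooth = lambda_smooth * np.diff(t, axis=0).ravel()
        return np.concatenate([data, smooth])

    # Solve for all frame poses at once rather than one frame at a time.
    result = least_squares(residuals, x0=np.zeros(N_FRAMES * 3))
    est_t = result.x.reshape(N_FRAMES, 3)
    print("mean translation error:", np.linalg.norm(est_t - true_t, axis=1).mean())

The coupling between frames comes entirely from the smoothness residual: dropping it decomposes the problem into independent per-frame fits, which is exactly what fails under heavy mutual occlusion.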

Archived Files and Locations

application/pdf  3.2 MB
file_6xa5vrlohnhwnoqllblusvp77y
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-03-02
Version: v4
Language: en
arXiv: 1907.01481v4
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: a03df926-9879-4332-97e7-210a16d09469