Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point
Clouds and Analytic Grasp Metrics
by
Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard
Doan, Xinyu Liu, Juan Aparicio Ojea, Ken Goldberg
2017
Abstract
To reduce data collection time for deep learning of robust robotic grasp
plans, we explore training from a synthetic dataset of 6.7 million point
clouds, grasps, and analytic grasp metrics generated from thousands of 3D
models from Dex-Net 1.0 in randomized poses on a table. We use the resulting
dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network
(GQ-CNN) model that rapidly predicts the probability of success of grasps from
depth images, where grasps are specified as the planar position, angle, and
depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000
trials on an ABB YuMi comparing grasp planning methods on singulated objects
suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be
used to plan grasps in 0.8 sec with a success rate of 93% on eight known objects
with adversarial geometry and is 3x faster than registering point clouds to a
precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp
planner also has the highest success rate on a dataset of 10 novel rigid
objects and achieves 99% precision (one false positive out of 69 grasps
classified as robust) on a dataset of 40 novel household objects, some of which
are articulated or deformable. Code, datasets, videos, and supplementary
material are available at http://berkeleyautomation.github.io/dex-net.
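
As a reading aid, here is a minimal Python sketch of the planning setup the
abstract describes: grasps are parameterized by planar position, angle, and
depth relative to the RGB-D sensor, each candidate is scored by a learned
model, and the most robust candidate is selected. All names, and the scoring
stub in particular, are hypothetical illustrations under those assumptions,
not the Dex-Net 2.0 API.

    # Hypothetical sketch of the 4-DOF grasp parameterization from the
    # abstract; the scoring function is a stand-in for GQ-CNN inference.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PlanarGrasp:
        x: float      # grasp center, image column (pixels)
        y: float      # grasp center, image row (pixels)
        angle: float  # gripper axis angle in the image plane (radians)
        depth: float  # gripper depth along the camera optical axis (meters)

    def score_grasps(depth_image: np.ndarray, grasps: list) -> np.ndarray:
        # A real GQ-CNN would take a depth-image crop centered and rotated
        # on each grasp, plus the gripper depth, and output P(success).
        # Placeholder scores keep this example self-contained and runnable.
        rng = np.random.default_rng(0)
        return rng.random(len(grasps))

    # Sample candidate grasps, score them, and pick the most robust one.
    depth_image = np.ones((480, 640), dtype=np.float32)  # dummy depth map (m)
    rng = np.random.default_rng(1)
    candidates = [
        PlanarGrasp(
            x=rng.uniform(0, 640),
            y=rng.uniform(0, 480),
            angle=rng.uniform(-np.pi / 2, np.pi / 2),
            depth=rng.uniform(0.5, 0.8),
        )
        for _ in range(100)
    ]
    scores = score_grasps(depth_image, candidates)
    best = candidates[int(np.argmax(scores))]
    print(f"best grasp: ({best.x:.0f}, {best.y:.0f}), "
          f"angle={best.angle:.2f} rad, depth={best.depth:.2f} m")

The argmax-over-sampled-candidates pattern mirrors how a learned grasp
quality model is typically used for planning: the network only ranks
candidates, so planning speed depends on batching inference over many
sampled grasps at once.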
Archived Files and Locations
application/pdf, 8.1 MB (arXiv:1703.09312v2; mirrored at arxiv.org and web.archive.org)