Unseen Object Instance Segmentation for Robotic Environments
release_u6z3zfwzene7tncwq7z6vtuusq

by Christopher Xie, Yu Xiang, Arsalan Mousavian, Dieter Fox

Released as an article.

2020  

Abstract

In order to function in unstructured environments, robots need the ability to recognize unseen objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. Our proposed method, UOIS-Net, separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. UOIS-Net is comprised of two stages: first, it operates only on depth to produce object instance center votes in 2D or 3D and assembles them into rough initial masks. Secondly, these initial masks are refined using RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method can produce sharp and accurate segmentation masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping.
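The abstract describes a two-stage pipeline: depth-only center voting assembled into rough initial masks, followed by RGB-based refinement. The toy sketch below illustrates that structure only; the voting, binning, and threshold logic here are illustrative stand-ins (the actual UOIS-Net stages are learned networks, and the helper names are hypothetical).

```python
import numpy as np

def assemble_initial_masks(center_votes, foreground, bin_size=10):
    """Stage 1 (sketch): group foreground pixels whose predicted 2D object
    centers fall into the same spatial bin into one rough instance mask.
    `center_votes` is an (H, W, 2) array of per-pixel center predictions,
    a stand-in for the depth network's output."""
    h, w, _ = center_votes.shape
    labels = np.zeros((h, w), dtype=int)
    bins = {}          # maps a quantized center location to an instance label
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not foreground[y, x]:
                continue
            key = tuple((center_votes[y, x] // bin_size).astype(int))
            if key not in bins:
                bins[key] = next_label
                next_label += 1
            labels[y, x] = bins[key]
    return labels

def refine_masks(labels, rgb):
    """Stage 2 (sketch): refine the initial masks using RGB. A trivial
    placeholder drops mask pixels whose RGB intensity is near zero; the
    actual method uses a learned RGB refinement network."""
    intensity = rgb.mean(axis=2)
    refined = labels.copy()
    refined[intensity < 10] = 0
    return refined

# Toy scene: two square objects, each voting for its own center.
h, w = 20, 20
votes = np.zeros((h, w, 2))
fg = np.zeros((h, w), dtype=bool)
fg[2:6, 2:6] = True;    votes[2:6, 2:6] = [4, 4]
fg[12:16, 12:16] = True; votes[12:16, 12:16] = [14, 14]
labels = assemble_initial_masks(votes, fg)
rgb = np.full((h, w, 3), 255.0)
refined = refine_masks(labels, rgb)
```

On this toy input the two vote clusters land in different bins, so stage 1 yields two distinct instance masks, which the (here trivial) stage 2 leaves intact.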

Archived Files and Locations

application/pdf  9.9 MB
file_tkmtikm7rjfwba7uvd4lqpv3r4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-07-16
Version   v1
Language   en
arXiv  2007.08073v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: a9c71ca7-28e6-4e6f-81bf-d95ed17cfa94
API URL: JSON