Learning Mobile Manipulation
by David Watkins, 2022
Abstract
Providing mobile robots with the ability to manipulate objects has, despite
decades of research, remained a challenging problem. The problem is
approachable in constrained environments where there is ample prior knowledge
of the environment layout and manipulable objects. The challenge is in
building systems that scale beyond specific situational instances and
gracefully operate in novel conditions. In the past, researchers used heuristic
and simple rule-based strategies to accomplish tasks such as scene segmentation
or reasoning about occlusion. These heuristic strategies work in constrained
environments where a roboticist can make simplifying assumptions about
everything from the geometries of the objects to be manipulated to the level of
clutter, camera position, lighting, and a myriad of other relevant variables.
The work in this thesis will demonstrate how to build a system for robotic
mobile manipulation that is robust to changes in these variables. This
robustness will be enabled by recent simultaneous advances in the fields of big
data, deep learning, and simulation. The ability of simulators to create
realistic sensory data enables the generation of massive corpora of labeled
training data for various grasping and navigation-based tasks. It is now
possible to build systems, trained with deep learning entirely on synthetic
data, that work in the real world. The ability to train and test on synthetic
data allows for rapid, iterative development of new perception, planning, and
grasp-execution algorithms that work in many environments.
Archived Files and Locations
application/pdf (32.9 MB), arXiv:2206.03448v1, available at arxiv.org and web.archive.org