Visual-Inertial-Semantic Scene Representation for 3-D Object Detection
by
Jingming Dong, Xiaohan Fei, Stefano Soatto
2017
Abstract
We describe a system to detect objects in three-dimensional space using video
and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile
platforms from phones to drones. Inertials afford the ability to impose
class-specific scale priors for objects, and provide a global orientation
reference. A minimal sufficient representation, the posterior of semantic
(identity) and syntactic (pose) attributes of objects in space, can be
decomposed into a geometric term, which can be maintained by a
localization-and-mapping filter, and a likelihood function, which can be
approximated by a discriminatively-trained convolutional neural network. The
resulting system can process the video stream causally in real time, and
provides a representation of objects in the scene that is persistent:
Confidence in the presence of objects grows with evidence, and objects
previously seen are kept in memory even when temporarily occluded, with their
return into view automatically predicted to prime re-detection.
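The decomposition described above, a posterior over object attributes factored into a geometric term maintained by a filter and a likelihood approximated by a CNN, amounts to a recursive Bayesian update applied frame by frame. The sketch below is a minimal, hypothetical illustration of that update over a small set of object classes; the class names, the uniform initial belief, and the hard-coded per-frame scores (standing in for CNN outputs) are all assumptions, not taken from the paper.

```python
import numpy as np

def update_posterior(prior, likelihood):
    """One recursive Bayesian step: posterior ∝ likelihood × prior."""
    post = prior * likelihood
    return post / post.sum()

# Hypothetical object classes and a uniform initial belief.
classes = ["chair", "table", "none"]
belief = np.full(3, 1.0 / 3.0)

# Simulated per-frame likelihood scores favoring "chair"; in the
# system described, these would come from a discriminatively trained
# CNN, with the geometric term supplied by the
# localization-and-mapping filter.
for scores in [np.array([0.7, 0.2, 0.1])] * 3:
    belief = update_posterior(belief, scores)

print(classes[int(np.argmax(belief))])
```

With repeated consistent evidence the belief concentrates on one class, which mirrors the paper's claim that confidence in the presence of objects grows with evidence.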
Archived Files and Locations
application/pdf, 5.1 MB: arxiv.org (repository), web.archive.org (webarchive)