Visual-Inertial-Semantic Scene Representation for 3-D Object Detection

by Jingming Dong, Xiaohan Fei, Stefano Soatto

Released as a report.

2017  

Abstract

We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertial sensors afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection.
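The decomposition the abstract describes can be illustrated with a toy recursive Bayesian update: a posterior over an object's class is maintained causally by combining a class-specific metric-scale prior (made possible because the inertial sensors fix the metric scale of the reconstruction) with a per-frame detection likelihood standing in for the CNN score. All class names, scale priors, and scores below are illustrative assumptions, not values from the paper.

```python
import math

CLASSES = ["chair", "monitor", "mug"]
# Hypothetical class-specific object-height priors in meters: (mean, std).
SCALE_PRIORS = {"chair": (0.90, 0.15), "monitor": (0.45, 0.10), "mug": (0.10, 0.03)}

def gaussian(x, mu, sigma):
    """Gaussian density, used as the likelihood of a measured metric height."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update(posterior, measured_height, cnn_scores):
    """One causal update: prior * scale likelihood * CNN likelihood, renormalized."""
    unnorm = {
        c: posterior[c] * gaussian(measured_height, *SCALE_PRIORS[c]) * cnn_scores[c]
        for c in CLASSES
    }
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Start from a uniform prior; feed two frames of (metric height, CNN class scores).
belief = {c: 1.0 / len(CLASSES) for c in CLASSES}
for height, scores in [(0.42, {"chair": 0.3, "monitor": 0.6, "mug": 0.1}),
                       (0.46, {"chair": 0.2, "monitor": 0.7, "mug": 0.1})]:
    belief = update(belief, height, scores)

best = max(belief, key=belief.get)
```

Because the belief is carried forward between frames rather than recomputed, confidence accumulates with evidence, which is the sense in which the representation is persistent.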

Archived Files and Locations

application/pdf  5.1 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type: report
Stage: submitted
Date: 2017-04-17
Version: v2
Language: en
Number: CSD160005
arXiv: 1606.03968v2