Occlusion-Robust Online Multi-Object Visual Tracking using a GM-PHD Filter with CNN-Based Re-Identification
by
Nathanael L. Baisa
2020
Abstract
We propose a novel online multi-object visual tracking algorithm that follows
the tracking-by-detection paradigm, combining a Gaussian mixture Probability
Hypothesis Density (GM-PHD) filter with deep Convolutional Neural Network (CNN)
appearance representation learning. The GM-PHD filter has linear complexity in
the numbers of objects and observations while estimating the states and
cardinality of an unknown, time-varying number of objects in the scene.
Although it handles object birth, death and clutter in a unified framework, it
is susceptible to missed detections and does not maintain object identities. We
use visual and spatio-temporal information obtained from object bounding boxes
and deeply learned appearance representations to perform estimates-to-tracks
data association for labeling each target, and we formulate an augmented
likelihood that is integrated into the update step of the GM-PHD filter. We
learn the deep CNN appearance representations by training an identification
network (IdNet) on large-scale person re-identification data sets. We also
predict unassigned tracks after the data association step to overcome the
GM-PHD filter's susceptibility to missed detections caused by occlusion. Our
tracker runs in real time and is applied to track multiple objects in video
sequences acquired under varying environmental conditions and object densities.
Finally, extensive evaluations on the Multiple Object Tracking 2016 (MOT16) and
2017 (MOT17) benchmark data sets show that our online tracker significantly
outperforms several state-of-the-art trackers in terms of tracking accuracy and
identity preservation.
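
To make the abstract's central idea concrete, here is a minimal Python sketch
of one GM-PHD update step in which the usual spatial (Gaussian) measurement
likelihood is multiplied by an appearance term, here cosine similarity between
CNN embeddings. This is an illustration under assumptions, not the paper's
exact formulation: the component representation (weight, mean, covariance),
the per-component embeddings track_feats, and the multiplicative combination
of the two terms are all choices made for this sketch.

    import numpy as np

    def gaussian_likelihood(z, mean, cov):
        """Evaluate the Gaussian density N(z; mean, cov) at measurement z."""
        d = z - mean
        norm = np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(cov))
        return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm)

    def gm_phd_update(components, detections, det_feats, track_feats,
                      H, R, p_detect=0.95, clutter=1e-4):
        """One GM-PHD update with an augmented likelihood (sketch).

        components  : list of (weight, mean, covariance) Gaussian terms
        detections  : list of measurement vectors z
        det_feats   : CNN embedding per detection (hypothetical input)
        track_feats : CNN embedding per component (hypothetical input)
        """
        updated = []
        # Missed-detection terms: existing components survive, scaled down.
        for (w, m, P) in components:
            updated.append(((1.0 - p_detect) * w, m, P))
        # Detection terms: Kalman-update each component against each detection.
        for z, f in zip(detections, det_feats):
            new_comps = []
            for (w, m, P), g in zip(components, track_feats):
                S = H @ P @ H.T + R                     # innovation covariance
                K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
                spatial = gaussian_likelihood(z, H @ m, S)
                appearance = max(0.0, float(f @ g) /
                                 (np.linalg.norm(f) * np.linalg.norm(g)))
                like = spatial * appearance             # augmented likelihood
                m_new = m + K @ (z - H @ m)
                P_new = (np.eye(len(m)) - K @ H) @ P
                new_comps.append((p_detect * w * like, m_new, P_new))
            total = clutter + sum(w for (w, _, _) in new_comps)
            updated.extend((w / total, m, P) for (w, m, P) in new_comps)
        return updated

A caller would supply, for example, a constant-velocity state with a position
measurement matrix H and the detector's box centres as z; those modelling
choices are outside this sketch.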
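
The abstract also names two further steps: estimates-to-tracks data
association on combined visual/spatial cues, and prediction of tracks left
unassigned (e.g. during occlusion). The sketch below illustrates both with a
Hungarian assignment on a blended cost. The names iou_fn, motion_fn, alpha,
cost_thresh and the track attributes box and feat are hypothetical; the
paper's actual cost, gating and prediction model may differ.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate_and_predict(tracks, estimates, est_feats,
                              iou_fn, motion_fn,
                              alpha=0.5, cost_thresh=0.7):
        """Assign filter estimates to existing tracks, then keep
        unassigned tracks alive by motion-only prediction (sketch)."""
        cost = np.zeros((len(tracks), len(estimates)))
        for i, t in enumerate(tracks):
            for j, (e, f) in enumerate(zip(estimates, est_feats)):
                # Appearance cost: 1 - cosine similarity of embeddings.
                app = 1.0 - float(t.feat @ f) / (np.linalg.norm(t.feat) *
                                                 np.linalg.norm(f))
                # Spatial cost: 1 - bounding-box overlap (IoU).
                spatial = 1.0 - iou_fn(t.box, e)
                cost[i, j] = alpha * app + (1.0 - alpha) * spatial
        rows, cols = linear_sum_assignment(cost)
        assigned = set()
        for i, j in zip(rows, cols):
            if cost[i, j] < cost_thresh:        # accept plausible matches only
                tracks[i].box = estimates[j]
                tracks[i].feat = est_feats[j]
                assigned.add(i)
        # Unassigned tracks: predict forward so a temporary occlusion
        # does not destroy the identity.
        for i, t in enumerate(tracks):
            if i not in assigned:
                t.box = motion_fn(t.box)
        return tracks

Hungarian assignment is a standard choice for this kind of one-to-one
labeling problem; it is used here only to make the association step concrete.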
Archived Files and Locations
application/pdf, 20.4 MB (arxiv.org repository; archived at web.archive.org)
arXiv: 1912.05949v4