Building Segmentation through a Gated Graph Convolutional Neural Network
with Deep Structured Feature Embedding
by
Yilei Shi, Qingyu Li, Xiao Xiang Zhu
2019
Abstract
Automatic building extraction from optical imagery remains a challenge due to,
among other factors, the complexity of building shapes. Semantic segmentation is
an efficient approach for this task. Recent developments in deep convolutional
neural networks (DCNNs) have made accurate pixel-level classification possible.
Yet one central issue remains: the precise delineation of boundaries.
Deep architectures generally fail to produce fine-grained segmentation with
accurate boundaries due to their progressive down-sampling. Hence, we introduce
a generic framework to overcome the issue, integrating the graph convolutional
network (GCN) and deep structured feature embedding (DSFE) into an end-to-end
workflow. Furthermore, instead of using a classic graph convolutional neural
network, we propose a gated graph convolutional network, which enables the
refinement of weak and coarse semantic predictions to generate sharp borders
and fine-grained pixel-level classification. Taking the semantic segmentation
of building footprints as a practical example, we compared different feature
embedding architectures and graph neural networks. Our proposed framework with
the new GCN architecture outperforms state-of-the-art approaches. Although our
main task in this work is building footprint extraction, the proposed method
can be generally applied to other binary or multi-label segmentation tasks.
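To make the gating idea concrete, the following is a minimal NumPy sketch of one gated graph-convolution step over node features: neighbor features are aggregated through a normalized adjacency and then fused with the previous node states via a GRU-style gate. All names, dimensions, and the exact gating equations here are illustrative assumptions in the spirit of gated graph networks, not the authors' precise formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_graph_conv(H, A, params):
    """One gated graph-convolution step (illustrative sketch):
    aggregate neighbor features through a normalized adjacency,
    then update node states with a GRU-style gate."""
    # Symmetrically normalize adjacency with self-loops: D^-1/2 (A+I) D^-1/2
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(deg, deg))

    W, Wz, Uz, Wr, Ur, Wh, Uh = params
    M = A_norm @ H @ W                       # neighborhood message
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    z = sig(M @ Wz + H @ Uz)                 # update gate
    r = sig(M @ Wr + H @ Ur)                 # reset gate
    H_tilde = np.tanh(M @ Wh + (r * H) @ Uh) # candidate states
    return (1.0 - z) * H + z * H_tilde       # gated state update

# Toy graph: 4 nodes (e.g. superpixel features), 8-dim states (hypothetical sizes)
N, d = 4, 8
H = rng.standard_normal((N, d))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(7)]
H_new = gated_graph_conv(H, A, params)
print(H_new.shape)  # (4, 8)
```

Because the update gate interpolates between the old state and the aggregated candidate, repeated applications can sharpen coarse predictions without washing out per-node detail, which is the role the gating plays in boundary refinement.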
Archived Files and Locations
application/pdf, 41.9 MB
arxiv.org (repository); web.archive.org (webarchive)
arXiv:1911.03165v1