Do Game Data Generalize Well for Remote Sensing Image Segmentation?
by
Zhengxia Zou, Tianyang Shi, Wenyuan Li, Zhou Zhang, Zhenwei Shi
Abstract
Despite recent progress in deep learning and remote sensing image interpretation, adapting a deep learning model between different sources of remote sensing data remains a challenge. This paper investigates an interesting question: do synthetic data generalize well for remote sensing image applications? To answer this question, we take building segmentation as an example, training a deep learning model on the city map of the well-known video game "Grand Theft Auto V" and then adapting the model to real-world remote sensing images. We propose a segmentation framework based on generative adversarial training to improve the adaptability of the segmentation model. Our model consists of a CycleGAN model and a ResNet-based segmentation network: the former is a well-known image-to-image translation framework that learns a mapping of images from the game domain to the remote sensing domain; the latter learns to predict pixel-wise building masks from the translated images. All models in our method can be trained in an end-to-end fashion, and the segmentation model requires no additional ground truth reference for the real-world images. Experimental results on a public building segmentation dataset demonstrate the effectiveness of our adaptation method, which outperforms state-of-the-art semantic segmentation methods such as DeepLab-v3 and U-Net. A further advantage of our method is that introducing semantic information into the image-to-image translation framework further improves the image style conversion.
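To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two-stage idea in the abstract: a generator translates game-domain images into the remote sensing domain, and a ResNet-based network segments the translated images, with both trained jointly using only game-side masks. This is an illustrative assumption of the architecture, not the authors' released code; the placeholder generator stands in for a full CycleGAN (whose adversarial and cycle-consistency losses are omitted here), and all module names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class ResNetSegmenter(nn.Module):
    """ResNet-18 encoder with a small upsampling head for binary building masks."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep everything up to the last residual stage (drop avgpool and fc).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # one channel: building vs. background
        )

    def forward(self, x):
        logits = self.head(self.encoder(x))
        # Upsample coarse logits back to input resolution for pixel-wise masks.
        return nn.functional.interpolate(
            logits, size=x.shape[-2:], mode="bilinear", align_corners=False
        )


# Placeholder generator standing in for the CycleGAN game -> remote sensing mapping.
generator = nn.Conv2d(3, 3, kernel_size=3, padding=1)
segmenter = ResNetSegmenter()
optimizer = torch.optim.Adam(
    list(generator.parameters()) + list(segmenter.parameters()), lr=2e-4
)
bce = nn.BCEWithLogitsLoss()

# One end-to-end training step on game images paired with game-side masks
# (dummy tensors here; no real-world labels are used).
game_images = torch.randn(2, 3, 256, 256)
game_masks = torch.randint(0, 2, (2, 1, 256, 256)).float()

optimizer.zero_grad()
translated = generator(game_images)  # game domain -> remote sensing style
loss = bce(segmenter(translated), game_masks)
# In the full method this segmentation term would be combined with the
# CycleGAN adversarial and cycle-consistency losses.
loss.backward()
optimizer.step()
```

Because the segmentation loss backpropagates through the generator, the style translation is steered toward outputs the segmenter can label correctly, which matches the abstract's note that semantic information improves the image style conversion.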
Archived Files and Locations
application/pdf (4.9 MB): res.mdpi.com (publisher), web.archive.org (webarchive)
Open Access Publication (listed in DOAJ, ISSN ROAD, and the Keepers Registry)
ISSN-L: 2072-4292