Semantic Segmentation on Multiple Visual Domains

by Floris Naber

Released as an article.

2021  

Abstract

Semantic segmentation models only perform well on the domain they are trained on, and datasets for training are scarce and often have small label-spaces, because the required pixel-level annotations are expensive to make. Training models on multiple existing domains is therefore desirable to increase the output label-space. Current research shows that there is potential to improve accuracy across datasets by using multi-domain training, but this has not yet been successfully extended to datasets of three different non-overlapping domains without manual labelling. In this paper such a method is proposed for the datasets Cityscapes, SUIM and SUN RGB-D, by creating a label-space that spans all classes of the datasets. Duplicate classes are merged and discrepant granularity is resolved by keeping classes separate. Results show that the multi-domain model achieves higher accuracy than all baseline models together when hardware performance is equalized, as resources are not limitless, showing that models benefit from additional data even from domains that have nothing in common.
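The label-space construction described in the abstract can be illustrated with a minimal Python sketch. The class lists below are abbreviated, hypothetical subsets of Cityscapes, SUIM and SUN RGB-D, and the merge table is an illustrative assumption rather than the paper's exact mapping; it only shows the idea of merging duplicate classes while keeping classes of differing granularity separate.

# Minimal sketch of building a unified label-space across datasets.
# Class lists and the merge table are illustrative assumptions, not
# the paper's exact class definitions or mapping.

CITYSCAPES = ["road", "sidewalk", "person", "car", "vegetation"]
SUIM = ["background", "human diver", "fish", "reef", "robot"]
SUN_RGBD = ["wall", "floor", "chair", "table", "person"]

# Duplicate classes across datasets are merged under one unified name;
# classes with discrepant granularity are kept as separate entries.
MERGE = {
    ("cityscapes", "person"): "person",
    ("sunrgbd", "person"): "person",   # same concept, so merged
}

def build_label_space(datasets):
    """Return the unified class list and per-dataset id -> unified id maps."""
    unified, index, remap = [], {}, {}
    for name, classes in datasets.items():
        remap[name] = {}
        for local_id, cls in enumerate(classes):
            unified_name = MERGE.get((name, cls), f"{name}/{cls}")
            if unified_name not in index:          # new unified class
                index[unified_name] = len(unified)
                unified.append(unified_name)
            remap[name][local_id] = index[unified_name]
    return unified, remap

unified, remap = build_label_space(
    {"cityscapes": CITYSCAPES, "suim": SUIM, "sunrgbd": SUN_RGBD}
)
print(len(unified), "unified classes")
print(remap["sunrgbd"])  # local label id -> unified label id

With such a remapping, each dataset's ground-truth label ids can be translated into the unified label-space before training a single multi-domain model.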

Archived Files and Locations

application/pdf  2.1 MB
file_34w6fkamrbdqrcgeu7mmzbnsyi
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-07-09
Version   v1
Language   en
arXiv  2107.04326v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 33f27d99-d184-4356-bd78-a2b131b428aa
API URL: JSON