Unsupervised Single-Image Super-Resolution with Multi-Gram Loss
by
Yong Shi, Biao Li, Bo Wang, Zhiquan Qi, Jiabin Liu
Abstract
Recently, supervised deep super-resolution (SR) networks have achieved great success in both accuracy and texture generation. However, most methods are trained on datasets with a fixed degradation kernel (such as bicubic) linking high-resolution images to their low-resolution counterparts. In real-life applications, images are often corrupted by additional artifacts, e.g., a non-ideal point-spread function in old film photos or compression loss in cellphone photos. Generating a satisfactory SR image from a single low-resolution (LR) image with such unknown priors remains a challenging problem. In this paper, we propose a novel unsupervised method, unsupervised single-image SR with multi-gram loss (UMGSR), to overcome this limitation. This paper makes two significant contributions: (a) we design a new architecture that extracts more information from limited inputs by combining local residual blocks with two-step global residual learning; (b) we introduce the multi-gram loss for the SR task to generate better image details. Experimental comparison shows that under normal conditions our unsupervised method can attain better visual results than other supervised SR methods.
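The abstract does not spell out the multi-gram loss, but Gram-matrix matching losses are typically computed as the distance between channel-wise feature correlations at several network layers. The following is a minimal NumPy sketch of such a loss under that assumption; the function names, feature shapes, and toy inputs are illustrative, not the authors' implementation.

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a feature map with shape (C, H, W):
    inner products between channels, normalized by tensor size."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def multi_gram_loss(feats_sr, feats_ref):
    """Sum of mean squared Gram-matrix differences over several layers,
    where each list element stands for one layer's activations."""
    loss = 0.0
    for fs, fr in zip(feats_sr, feats_ref):
        loss += np.mean((gram_matrix(fs) - gram_matrix(fr)) ** 2)
    return loss

# toy feature maps standing in for network activations at three layers
rng = np.random.default_rng(0)
layers_a = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
print(multi_gram_loss(layers_a, layers_a))  # identical features give 0.0
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, matching it across multiple layers constrains texture statistics rather than exact pixel positions, which is why such losses help with detail generation.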
Archived Files and Locations
application/pdf, 2.3 MB: web.archive.org (webarchive), res.mdpi.com (publisher)
Open Access Publication
In DOAJ
In ISSN ROAD
In Keepers Registry
ISSN-L: 2079-9292