Deep Burst Super-Resolution
by
Goutam Bhat, Martin Danelljan, Luc Van Gool, Radu Timofte
2021
Abstract
While single-image super-resolution (SISR) has attracted substantial interest
in recent years, the proposed approaches are limited to learning image priors
in order to add high-frequency details. In contrast, multi-frame
super-resolution (MFSR) offers the possibility of reconstructing rich details
by combining signal information from multiple shifted images. This key
advantage, along with the increasing popularity of burst photography, has made
MFSR an important problem for real-world applications.
We propose a novel architecture for the burst super-resolution task. Our
network takes multiple noisy RAW images as input, and generates a denoised,
super-resolved RGB image as output. This is achieved by explicitly aligning
deep embeddings of the input frames using pixel-wise optical flow. The
information from all frames is then adaptively merged using an attention-based
fusion module. In order to enable training and evaluation on real-world data,
we additionally introduce the BurstSR dataset, consisting of smartphone bursts
and high-resolution DSLR ground-truth. We perform comprehensive experimental
analysis, demonstrating the effectiveness of the proposed architecture.
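The alignment-and-fusion idea in the abstract can be sketched in a few lines: given per-frame deep embeddings already aligned to the reference frame, an attention-based fusion reduces to a per-pixel softmax over frames followed by a weighted sum. The sketch below is a minimal NumPy illustration, not the paper's implementation; the function name `attention_fuse` is hypothetical, and the per-pixel confidence scores are taken as given (in the paper they would be predicted by the network).

```python
import numpy as np

def attention_fuse(embeddings, scores):
    """Fuse aligned per-frame embeddings with per-pixel attention weights.

    embeddings: (N, C, H, W) deep features of N burst frames, assumed
                already aligned to the reference frame via optical flow.
    scores:     (N, H, W) unnormalized per-pixel confidence scores
                (hypothetical input; the paper predicts these with a network).
    Returns a (C, H, W) fused embedding.
    """
    # Softmax over the frame axis -> per-pixel fusion weights that sum to 1.
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    # Broadcast weights over channels and sum across frames.
    return (w[:, None, :, :] * embeddings).sum(axis=0)

# Toy example: a 4-frame burst with 8-channel embeddings on a 16x16 grid.
rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 8, 16, 16))
sc = rng.standard_normal((4, 16, 16))
fused = attention_fuse(emb, sc)
print(fused.shape)  # (8, 16, 16)
```

The softmax over the frame dimension lets the fusion down-weight frames that are misaligned or noisy at a given pixel, which is the intuition behind adaptive merging here.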
arXiv:2101.10997v1