Many image restoration problems are ill-posed in nature; hence, beyond the input image, most existing methods rely on a carefully engineered image prior that enforces some local consistency in the recovered image. How well the prior assumptions hold has a large impact on the resulting task performance. To gain more flexibility, in this work we propose to design the image prior in a data-driven manner: instead of defining the prior explicitly, we learn it using deep generative models. We demonstrate that this learned prior can be applied to many image restoration problems within a unified framework.
This project shares the same repository as our previous work on semantic image inpainting; check out the extensions branch. [Github]
We provide a visual comparison of our results against Total Variation (TV) minimization, sparse coding, and nuclear-norm minimization (a.k.a. Low-Rank (LR)) methods on the following image restoration tasks.
Semantic Inpainting: The center 32 × 32 patch is replaced with zeros; the missing region is assumed to be known and available to the restoration method.
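To make the inpainting setup concrete, here is a minimal sketch of the degradation: a binary mask that zeros out the center 32 × 32 patch. The image size of 64 × 64 and the float-in-[0, 1] convention are our assumptions, not stated in the text.

```python
import numpy as np

def center_mask(size=64, hole=32):
    """Binary mask: 1 = known pixel, 0 = missing center patch."""
    m = np.ones((size, size), dtype=np.float32)
    s = (size - hole) // 2
    m[s:s + hole, s:s + hole] = 0.0
    return m

def apply_mask(img, mask):
    """Replace the missing region with zeros, as in the inpainting setup."""
    return img * mask[..., None] if img.ndim == 3 else img * mask
```

The mask itself is what the restoration method receives alongside the corrupted image, since the missing region is assumed known.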
Colorization: The original image is converted to grayscale using the standard weighting of the RGB channels.
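A sketch of the grayscale conversion, assuming the "standard weighting" refers to the common ITU-R BT.601 luma weights (the text does not name the exact coefficients):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to grayscale
    using the ITU-R BT.601 luma weights (an assumed choice of
    'standard weighting'): 0.299 R + 0.587 G + 0.114 B."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights
```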
Super Resolution: The original image is downsampled by a factor of 4.
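One simple way to realize the ×4 downsampling is block averaging over 4 × 4 patches; the exact downsampling kernel is not specified in the text, so this is an illustrative choice:

```python
import numpy as np

def downsample(img, factor=4):
    """Downsample an H x W (or H x W x C) image by average pooling
    over factor x factor blocks (an assumed kernel choice)."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    img = img[:h, :w]
    pooled = img.reshape(h // factor, factor, w // factor, factor, -1)
    return pooled.mean(axis=(1, 3)).squeeze()
```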
Denoising: Additive Gaussian noise with a standard deviation of 0.1 (pixel intensities in [0, 1]) is applied to the original image.
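A sketch of the noise model, with σ = 0.1 on intensities in [0, 1]; clipping the result back into the valid range is our assumption, not stated in the text:

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1, seed=None):
    """Add zero-mean Gaussian noise with std sigma to an image with
    intensities in [0, 1], then clip to the valid range (assumed)."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```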
Quantization: The original image is quantized to discrete levels per channel.
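A sketch of uniform per-channel quantization; the text does not state the number of levels, so `levels` is left as a parameter:

```python
import numpy as np

def quantize(img, levels=8):
    """Uniformly quantize intensities in [0, 1] to `levels` discrete
    values per channel (level count is an assumption, not from the text)."""
    return np.round(img * (levels - 1)) / (levels - 1)
```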