We propose blur, noise, and compression robust GAN (BNCR-GAN) that can learn a clean image generator directly from degraded images without knowledge of degradation parameters (e.g., blur kernel types, noise amounts, or quality factor values).
Our related previous work
GAN for noise: Noise Robust GAN (CVPR 2020)
GAN for label noise: Label-Noise Robust GAN (CVPR 2019)
GAN for ambiguous labels: Classifier's Posterior GAN (BMVC 2019)
Abstract
Generative adversarial networks (GANs) have gained considerable attention owing to their ability to reproduce images. However, because they reproduce training images faithfully, they also replicate any degradation present in those images, in the form of blur, noise, and compression, and consequently generate similarly degraded images. To solve this problem, the recently proposed noise robust GAN (NR-GAN) provides a partial solution by demonstrating the ability to learn a clean image generator directly from noisy images using a two-generator model comprising image and noise generators. However, its application is limited to noise, which is relatively easy to decompose owing to its additive and reversible characteristics, and its application to irreversible image degradation, in the form of blur, compression, and a combination of all three, remains a challenge. To address these problems, we propose blur, noise, and compression robust GAN (BNCR-GAN), which can learn a clean image generator directly from degraded images without knowledge of degradation parameters (e.g., blur kernel types, noise amounts, or quality factor values). Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators. However, in contrast to NR-GAN, to address the irreversible characteristics, we introduce masking architectures that adjust degradation strength values in a data-driven manner using bypasses before and after degradation. Furthermore, to suppress the uncertainty caused by the combination of blur, noise, and compression, we introduce adaptive consistency losses that impose consistency between irreversible degradation processes according to the degradation strengths. We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ. In addition, we demonstrate the applicability of BNCR-GAN in image restoration.
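The degradation process the generator must learn to disentangle can be pictured as a fixed pipeline: blur (convolution with an unknown kernel), additive noise, and lossy compression. The following is a minimal illustrative sketch, not the paper's implementation: the Gaussian kernel, noise standard deviation, and quantization-based stand-in for quality-factor compression are all simplifying assumptions made for demonstration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2-D Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, kernel):
    """Valid-mode 2-D convolution of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def degrade(img, sigma_blur=1.0, noise_std=0.05, levels=16, rng=None):
    """Blur, then additive Gaussian noise, then quantization
    (a crude stand-in for quality-factor-controlled compression)."""
    rng = rng or np.random.default_rng(0)
    x = blur(img, gaussian_kernel(5, sigma_blur))   # irreversible
    x = x + rng.normal(0.0, noise_std, x.shape)     # additive, hence easier to decompose
    x = np.round(x * (levels - 1)) / (levels - 1)   # irreversible quantization
    return np.clip(x, 0.0, 1.0)
```

Note that the blur and quantization steps discard information, which is why, unlike additive noise, they cannot simply be subtracted away and motivate the masking architectures and consistency losses below.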
Key ideas
Masking architectures
To solve the sub-problems, we first propose two variants: blur robust GAN (BR-GAN) and compression robust GAN (CR-GAN), which are specific to blur and compression, respectively. To address the irreversible blur/compression characteristics, we introduce masking architectures that adapt degradation strengths in a data-driven manner, using bypasses before and after image degradation. This architectural constraint is useful for conducting only the necessary changes through blur or compression while suppressing unnecessary changes.
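As a rough illustration of the bypass idea (a sketch of our reading, not the paper's exact architecture), a learned mask can gate how much of the degraded signal passes through, so that a mask near zero routes the clean image around the degradation entirely:

```python
import numpy as np

def masked_degradation(x, degraded, m):
    """Bypass architecture sketch: blend the clean image x and its
    degraded version via a mask m in [0, 1] (scalar or broadcastable
    array). m = 0 bypasses the degradation; m = 1 applies it fully.
    The mask would be produced by a generator in a data-driven manner."""
    return (1.0 - m) * x + m * degraded
```

With this constraint, the model only applies as much blur or compression as the data demand, instead of always pushing images through an irreversible operation.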
Adaptive consistency losses
The unique problem of BNCR-GAN, which is a unified model integrating BR-GAN, NR-GAN, and CR-GAN, is that it needs to handle the uncertainty caused by the combination of blur, noise, and compression. Thus, we incorporate novel losses called adaptive consistency losses, which impose consistency between irreversible degradation processes according to the degradation strengths. These losses help prevent the generated images from containing unexpected artifacts that would disappear, and thus go undetected, after the irreversible processes.
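A minimal sketch of the idea follows; the exact weighting scheme here is our simplifying assumption, not the paper's formulation. The loss penalizes the difference between the image before and after an irreversible degradation step, down-weighted by the estimated degradation strength, so that genuine, strongly degraded samples are not penalized while weakly degraded ones must stay consistent:

```python
import numpy as np

def adaptive_consistency_loss(x_before, x_after, strength):
    """Consistency between pre- and post-degradation images, weighted
    by (1 - strength), with strength in [0, 1]. The (1 - strength)
    weighting is an assumed illustration of 'adaptive': when the
    estimated degradation is weak, consistency is enforced strongly;
    when it is strong, the constraint is relaxed."""
    per_pixel = (x_before - x_after) ** 2
    return float(np.mean((1.0 - strength) * per_pixel))
```

Intuitively, without such a constraint, the image generator could hide artifacts in exactly the information that blur or compression destroys; the adaptive weighting keeps that escape route closed whenever the degradation is not actually needed.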
Example results
Examples of blur robust image generation
Examples of compression robust image generation
Examples of blur, noise, and compression robust image generation
Paper
Takuhiro Kaneko and Tatsuya Harada. Blur, Noise, and Compression Robust Generative Adversarial Networks.
Acknowledgment
This work was partially supported by JST AIP Acceleration Research Grant Number JPMJCR20U3, JST CREST Grant Number JPMJCR2015, and JSPS KAKENHI Grant Number JP19H01115.
References
[1]
A. Bora, E. Price, A. G. Dimakis.
AmbientGAN: Generative Models from Lossy Measurements.
In ICLR, 2018.
[2]
T. Kaneko, T. Harada.
Noise Robust Generative Adversarial Networks.
In CVPR, 2020.
[3]
T. Kaneko, Y. Ushiku, T. Harada.
Label-Noise Robust Generative Adversarial Networks.
In CVPR, 2019.
[4]
T. Kaneko, Y. Ushiku, T. Harada.
Class-Distinct and Class-Mutual Image Generation with GANs.
In BMVC, 2019.