Noise Robust Generative Adversarial Networks

Takuhiro Kaneko1    Tatsuya Harada1,2   
1The University of Tokyo    2RIKEN

CVPR 2020
[Paper] [Code] [Slides] [Video]

Examples

Figure 1. Examples of noise robust image generation. Standard GAN (b)(e) replicates images faithfully even when training images are noisy (a)(d). In contrast, NR-GAN can learn to generate clean images (c)(f) even when the same noisy images (a)(d) are used for training.

Note: In our previous studies, we also proposed a GAN for label noise and a GAN for ambiguous labels. In a follow-up study, we proposed a GAN robust to blur, noise, and compression. Please see the links below.

GAN for label noise: Label-noise robust GAN (rGAN) (CVPR 2019)
GAN for ambiguous labels: Classifier's posterior GAN (CP-GAN) (BMVC 2019)
GAN for blur, noise, and compression: Blur, noise, and compression robust GAN (BNCR-GAN) (CVPR 2021)

Abstract

Generative adversarial networks (GANs) are neural networks that learn data distributions through adversarial training. Through intensive studies, recent GANs have shown promising results in reproducing training images. However, they reproduce images faithfully even when the training images are noisy; that is, they replicate the noise as well. As an alternative, we propose a novel family of GANs called noise robust GANs (NR-GANs), which can learn a clean image generator even when training images are noisy. In particular, NR-GANs can solve this problem without complete noise information (e.g., the noise distribution type, noise amount, or signal-noise relationship). To achieve this, we introduce a noise generator and train it along with a clean image generator. However, without any constraints, there is no incentive to generate the image and the noise separately. Therefore, we propose distribution and transformation constraints that encourage the noise generator to capture only the noise-specific components. In particular, by considering such constraints under different assumptions, we devise two variants of NR-GANs for signal-independent noise and three variants of NR-GANs for signal-dependent noise. On three benchmark datasets, we demonstrate the effectiveness of NR-GANs in noise robust image generation. Furthermore, we show the applicability of NR-GANs in image denoising.
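To illustrate the core idea, here is a minimal sketch (not the paper's implementation) of the two-generator composition for the signal-independent case: a clean image and a separately generated noise map are added before being shown to the discriminator, so the discriminator matches the noisy data distribution while the clean generator stays noise-free. The function names and the fixed Gaussian noise scale are illustrative assumptions; in NR-GAN, both components are neural networks and the noise parameters are learned.

```python
import numpy as np

def compose_observed_image(clean_image, noise):
    # Signal-independent additive noise model: observed = clean + noise.
    # Only `observed` is passed to the discriminator during training.
    return clean_image + noise

def sample_gaussian_noise(rng, shape, log_sigma=-1.0):
    # Hypothetical stand-in for the noise generator: a reparameterized
    # zero-mean Gaussian with a fixed log-scale. In NR-GAN, a network
    # predicts such noise (or its parameters) from a latent code.
    sigma = np.exp(log_sigma)
    return sigma * rng.standard_normal(shape)

rng = np.random.default_rng(0)
clean = np.zeros((4, 4))          # stand-in for a generated clean image
noise = sample_gaussian_noise(rng, clean.shape)
observed = compose_observed_image(clean, noise)
```

The distribution and transformation constraints in the paper are what prevent the degenerate solution in which the "noise" generator absorbs image content; this sketch only shows the compositing step itself.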

Paper

paper thumbnail      

[Paper]
arXiv:1911.11776
Nov. 2019.

[Slides] [Video]

Citation

Takuhiro Kaneko and Tatsuya Harada.
Noise Robust Generative Adversarial Networks. In CVPR, 2020.
[BibTex]

Code

[PyTorch]

Video

Examples of generated images

LSUN Bedroom with signal-independent noises

Examples of generated images on LSUN Bedroom with signal-independent noises

Figure 2. Examples of generated images on LSUN Bedroom with signal-independent noises. AmbientGAN is trained with the ground-truth noise model, while the other models are trained without full knowledge of the noise (i.e., the noise distribution type and noise amount).
LSUN Bedroom with signal-dependent noises

Examples of generated images on LSUN Bedroom with signal-dependent noises

Figure 3. Examples of generated images on LSUN Bedroom with signal-dependent noises. AmbientGAN is trained with the ground-truth noise model, while the other models are trained without full knowledge of the noise (i.e., the noise distribution type, noise amount, and signal-noise relationship).

Acknowledgment

We would like to thank Naoya Fushishita, Takayuki Hara, and Atsuhiro Noguchi for helpful discussions. This work was partially supported by JST CREST Grant Number JPMJCR1403 and JSPS KAKENHI Grant Number JP19H01115.

Related work

[1] A. Bora, E. Price, A. G. Dimakis. AmbientGAN: Generative Models from Lossy Measurements. In ICLR, 2018.
[2] T. Kaneko, Y. Ushiku, T. Harada. Label-Noise Robust Generative Adversarial Networks. In CVPR, 2019.
[3] T. Kaneko, Y. Ushiku, T. Harada. Class-Distinct and Class-Mutual Image Generation with GANs. In BMVC, 2019.
[4] T. Kaneko, T. Harada. Blur, Noise, and Compression Robust Generative Adversarial Networks. In CVPR, 2021.