Label-Noise Robust Generative Adversarial Networks

Takuhiro Kaneko1    Yoshitaka Ushiku1    Tatsuya Harada1,2   
1The University of Tokyo    2RIKEN

CVPR 2019 (Oral)
[Paper] [Code] [Slides] [Poster] [Talk]

Figure 1. Examples of label-noise robust conditional image generation. rGAN can learn a label-noise robust conditional generator that generates images conditioned on the clean label even when only noisy labeled images are available for training.

Note: In our other studies, we have also proposed a GAN for ambiguous labels, a GAN for image noise, and a GAN for blur, noise, and compression. Please see the links below.

Classifier's posterior GAN (CP-GAN) (BMVC 2019): GAN for ambiguous labels
Noise robust GAN (NR-GAN) (CVPR 2020): GAN for image noise
Blur, noise, and compression robust GAN (BNCR-GAN) (CVPR 2021): GAN for blur, noise, and compression

Abstract

Generative adversarial networks (GANs) are a framework that learns a generative distribution through adversarial training. Recently, their class-conditional extensions (e.g., conditional GAN (cGAN) and auxiliary classifier GAN (AC-GAN)) have attracted much attention owing to their ability to learn disentangled representations and to improve training stability. However, their training requires the availability of large-scale accurate class-labeled data, which are often laborious or impractical to collect in a real-world scenario. To remedy this, we propose a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean-label conditional generative distribution even when training labels are noisy. In particular, we propose two variants: rAC-GAN, which is a bridging model between AC-GAN and the label-noise robust classification model, and rcGAN, which is an extension of cGAN and solves this problem with no reliance on any classifier. In addition to providing the theoretical background, we demonstrate the effectiveness of our models through extensive experiments using diverse GAN configurations, various noise settings, and multiple evaluation metrics (in which we tested 402 conditions in total).

Paper

[Paper]
arXiv:1811.11165
Nov. 2018.

[Slides] [Poster] [Talk]

Citation

Takuhiro Kaneko, Yoshitaka Ushiku, and Tatsuya Harada.
Label-Noise Robust Generative Adversarial Networks. In CVPR, 2019.
[BibTex]

Code

[PyTorch]

Talk

Overview

Our task is, given noisy labeled data, to construct a label-noise robust conditional generator that generates images conditioned on the clean label rather than on the noisy label. Our main idea is to incorporate a noise transition model (shown as orange rectangles in Figures 2(b) and (d)), which represents the probability that a clean label is corrupted into a noisy label, into typical class-conditional GANs. In particular, we develop two variants, rAC-GAN (Figure 2(b)) and rcGAN (Figure 2(d)), which are extensions of AC-GAN [1] (Figure 2(a)) and cGAN [2][3] (Figure 2(c)), respectively; a sketch of how the transition model enters each variant follows Figure 2.

Figure 2. Comparison of standard conditional GANs and label-noise robust GANs. We denote the discriminator and auxiliary classifier by D and C, respectively. In our rAC-GAN (b) and rcGAN (d), we incorporate a noise transition model (shown as an orange rectangle) into AC-GAN (a) and cGAN (c), respectively.
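
To make the role of the transition model concrete, here is a minimal PyTorch sketch (not our released code; the function names and the K x K matrix `T`, with `T[i, j] = p(noisy = j | clean = i)`, are illustrative) of how a known transition matrix can enter each variant:

```python
import torch
import torch.nn.functional as F

def rac_gan_classifier_loss(clean_logits, noisy_labels, T):
    """rAC-GAN idea: forward-correct the auxiliary classifier with T.

    Rather than fitting p(clean | x) to the noisy labels directly, we fit the
    corrupted posterior p(noisy | x) = p(clean | x) @ T, so the classifier's
    softmax output converges toward the clean-label posterior.
    """
    p_clean = F.softmax(clean_logits, dim=1)   # (N, K) clean-label posterior
    p_noisy = p_clean @ T                      # (N, K) predicted noisy-label posterior
    return F.nll_loss(torch.log(p_noisy + 1e-8), noisy_labels)

def rc_gan_corrupt_labels(clean_labels, T):
    """rcGAN idea: corrupt the generator's clean labels through T.

    The generator is conditioned on clean labels, but the discriminator only
    ever sees (image, noisy label) pairs, so a fake pair is built by sampling
    a noisy label from T given each generated clean label.
    """
    probs = T[clean_labels]                    # (N, K): row i is p(noisy | clean = i)
    return torch.multinomial(probs, num_samples=1).squeeze(1)
```

In both cases the corruption is modeled explicitly through T, so the generator and classifier are pushed to match the data only after the corruption step, and hence to capture the clean-label distribution.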

Examples of generated images

CIFAR-10 (symmetric noise with a noise rate of 0.5)

Comparison between AC-CT-GAN and rAC-CT-GAN

Comparison between cSN-GAN and rcSN-GAN

Figure 3. Generated image samples on CIFAR-10 (symmetric noise with a noise rate of 0.5). In each picture block, each column shows samples associated with the same class: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck, respectively, from left to right. Each row includes samples generated from a fixed z and a varied y^g.
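
For reference, "symmetric noise with a noise rate of 0.5" corresponds to a transition matrix that keeps the true class with probability 0.5 and flips to each of the other nine CIFAR-10 classes uniformly. A small sketch under that convention (the function name is illustrative):

```python
import torch

def symmetric_transition_matrix(num_classes: int, noise_rate: float) -> torch.Tensor:
    """Rows are clean labels, columns are noisy labels; each row sums to 1."""
    T = torch.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    T.fill_diagonal_(1.0 - noise_rate)
    return T

T = symmetric_transition_matrix(num_classes=10, noise_rate=0.5)  # the CIFAR-10 setting above
```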

Acknowledgment

We would like to thank Hiroharu Kato, Yusuke Mukuta, and Mikihiro Tanaka for helpful discussions. This work was supported by JSPS KAKENHI Grant Number JP17H06100, partially supported by JST CREST Grant Number JPMJCR1403, Japan, and partially supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) as "Seminal Issue on Post-K Computer."

Related work

Note: Kiran Koshy Thekumparampil, Ashish Khetan, Zinan Lin, and Sewoong Oh published a paper [4] on the same problem independently of us. They use ideas similar to ours, albeit with a different architecture. You should also check out their awesome work at https://arxiv.org/abs/1811.03205.

[1] A. Odena, C. Olah, and J. Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs. In ICML, 2017.
[2] M. Mirza and S. Osindero. Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784, 2014.
[3] T. Miyato and M. Koyama. cGANs with Projection Discriminator. In ICLR, 2018.
[4] K. K. Thekumparampil, A. Khetan, Z. Lin, and S. Oh. Robustness of Conditional GANs to Noisy Labels. In NeurIPS, 2018.
[5] T. Kaneko, Y. Ushiku, and T. Harada. Class-Distinct and Class-Mutual Image Generation with GANs. In BMVC, 2019.
[6] T. Kaneko and T. Harada. Noise Robust Generative Adversarial Networks. In CVPR, 2020.
[7] T. Kaneko and T. Harada. Blur, Noise, and Compression Robust Generative Adversarial Networks. In CVPR, 2021.