Note: In our other studies, we have also proposed a GAN for ambiguous labels, a GAN for image noise, and a GAN for blur, noise, and compression. Please check them out via the links below.
Classifier's posterior GAN (CP-GAN) (BMVC 2019): GAN for ambiguous labels
Noise robust GAN (NR-GAN) (CVPR 2020): GAN for image noise
Blur, noise, and compression robust GAN (BNCR-GAN) (CVPR 2021): GAN for blur, noise, and compression
Generative adversarial networks (GANs) are a framework that learns a generative distribution through adversarial training. Recently, their class-conditional extensions (e.g., conditional GAN (cGAN) and auxiliary classifier GAN (AC-GAN)) have attracted much attention owing to their ability to learn disentangled representations and to improve training stability. However, their training requires large-scale, accurately class-labeled data, which are often laborious or impractical to collect in real-world scenarios. To remedy this, we propose a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean-label conditional generative distribution even when training labels are noisy. In particular, we propose two variants: rAC-GAN, which is a bridging model between AC-GAN and a label-noise robust classification model, and rcGAN, which is an extension of cGAN and solves this problem without relying on any classifier. In addition to providing the theoretical background, we demonstrate the effectiveness of our models through extensive experiments using diverse GAN configurations, various noise settings, and multiple evaluation metrics (in which we tested 402 conditions in total).
Takuhiro Kaneko, Yoshitaka Ushiku, and Tatsuya Harada.
Label-Noise Robust Generative Adversarial Networks. In CVPR, 2019.
Our task is, given noisy labeled data, to construct a label-noise robust conditional generator that can generate an image conditioned on the clean label rather than on the noisy label. Our main idea for solving this problem is to incorporate a noise transition model (depicted as orange rectangles in Figures 2(b) and (d); it represents the probability that a clean label is corrupted into a noisy label) into typical class-conditional GANs. In particular, we develop two variants: rAC-GAN (Figure 2(b)) and rcGAN (Figure 2(d)), which are extensions of AC-GAN (Figure 2(a)) and cGAN (Figure 2(c)), respectively.
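The role of the noise transition model can be sketched numerically. In the symmetric-noise setting, the transition matrix T has (1 - noise rate) on the diagonal and spreads the remaining probability uniformly over the other classes; multiplying a clean-label posterior by T yields the corresponding noisy-label posterior, which can then be matched against the observed noisy labels. This is a minimal NumPy sketch of that idea only; the function names are illustrative and not taken from the official implementation.

```python
import numpy as np

def symmetric_noise_transition(num_classes: int, noise_rate: float) -> np.ndarray:
    """Build T where T[i, j] = p(noisy label = j | clean label = i)
    under symmetric noise: a label flips to each other class uniformly."""
    T = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

def noisy_posterior(clean_posterior: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Map a clean-label posterior p(y_clean | x) (rows: samples) to the
    noisy-label posterior p(y_noisy | x) via the transition matrix."""
    return clean_posterior @ T

T = symmetric_noise_transition(num_classes=10, noise_rate=0.5)
p_clean = np.eye(10)[0:1]          # certain the clean label is class 0
p_noisy = noisy_posterior(p_clean, T)
```

Here `p_noisy` places probability 0.5 on class 0 and spreads 0.5 over the other nine classes; training against noisy labels through T is what lets the underlying model remain conditioned on clean labels.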
Examples of generated images
CIFAR-10 (symmetric noise with a noise rate of 0.5)
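The "symmetric noise with a noise rate of 0.5" setting above can be reproduced by corrupting each training label with probability 0.5, replacing it with a uniformly chosen different class. This is a small NumPy sketch under that assumption; the helper name is illustrative, not from the released code.

```python
import numpy as np

def corrupt_labels(labels, num_classes, noise_rate, rng):
    """Apply symmetric label noise: with probability noise_rate, replace
    each label with a uniformly chosen *different* class."""
    labels = np.asarray(labels).copy()
    flip = rng.random(labels.shape) < noise_rate
    # Adding a random offset in [1, num_classes) modulo num_classes
    # guarantees the corrupted label differs from the original.
    offsets = rng.integers(1, num_classes, size=labels.shape)
    labels[flip] = (labels[flip] + offsets[flip]) % num_classes
    return labels

rng = np.random.default_rng(0)
clean = rng.integers(0, 10, size=50000)          # CIFAR-10-sized label set
noisy = corrupt_labels(clean, num_classes=10, noise_rate=0.5, rng=rng)
```

About half of `noisy` then disagrees with `clean`, which is the input condition under which rAC-GAN and rcGAN are expected to still learn a clean-label conditional generator.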
We would like to thank Hiroharu Kato, Yusuke Mukuta, and Mikihiro Tanaka for helpful discussions. This work was supported by JSPS KAKENHI Grant Number JP17H06100, partially supported by JST CREST Grant Number JPMJCR1403, Japan, and partially supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) as "Seminal Issue on Post-K Computer."
Note: Kiran Koshy Thekumparampil, Ashish Khetan, Zinan Lin, and Sewoong Oh independently published a paper on the same problem. They use ideas similar to ours, albeit with a different architecture. You should also check out their awesome work at https://arxiv.org/abs/1811.03205.
A. Odena, C. Olah, and J. Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs. In ICML, 2017.
M. Mirza and S. Osindero. Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784, 2014.
T. Miyato and M. Koyama. cGANs with Projection Discriminator. In ICLR, 2018.
K. K. Thekumparampil, A. Khetan, Z. Lin, and S. Oh. Robustness of Conditional GANs to Noisy Labels. In NeurIPS, 2018.
T. Kaneko, Y. Ushiku, and T. Harada. Class-Distinct and Class-Mutual Image Generation with GANs. In BMVC, 2019.
T. Kaneko and T. Harada. Noise Robust Generative Adversarial Networks. In CVPR, 2020.
T. Kaneko and T. Harada. Blur, Noise, and Compression Robust Generative Adversarial Networks. In CVPR, 2021.