Preprints, Working Papers, ...

A Neural Tangent Kernel Perspective of GANs

Abstract: Theoretical analyses of Generative Adversarial Networks (GANs) generally assume an arbitrarily large family of discriminators and do not consider the characteristics of the architectures used in practice. We show that this analysis framework is too simplistic to properly study GAN training. To tackle this issue, we leverage the theory of infinite-width neural networks to model neural discriminator training for a wide range of adversarial losses via the discriminator's Neural Tangent Kernel (NTK). Our analytical results show that GAN trainability primarily depends on the discriminator's architecture. We further study the discriminator for specific architectures and losses, and highlight properties providing a new understanding of GAN training. For example, we find that GANs trained with the integral probability metric loss minimize the maximum mean discrepancy with the NTK as kernel. Our conclusions demonstrate the analysis opportunities provided by the proposed framework, which paves the way for better and more principled GAN models. We release a generic GAN analysis toolkit based on our framework that supports the empirical part of our study.
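To make the abstract's central example concrete, here is a minimal sketch in plain NumPy (not the authors' released toolkit) of the quantity described: the maximum mean discrepancy (MMD) between two samples, computed with the closed-form infinite-width NTK of a one-hidden-layer ReLU network as the kernel. The kernel formula is the standard two-layer ReLU NTK, stated here up to parameterization-dependent constants; the data, sample sizes, and seed are illustrative placeholders.

    import numpy as np

    def relu_ntk(X, Y):
        """Closed-form infinite-width NTK of a one-hidden-layer ReLU network
        (up to parameterization-dependent constants):

            k(x, y) = <x, y> * kappa0(u) + ||x|| ||y|| * kappa1(u),
            u = cos(x, y),
            kappa0(u) = (pi - arccos u) / (2 pi),
            kappa1(u) = (u (pi - arccos u) + sqrt(1 - u^2)) / (2 pi).
        """
        nx = np.linalg.norm(X, axis=1, keepdims=True)   # (n, 1) row norms
        ny = np.linalg.norm(Y, axis=1, keepdims=True)   # (m, 1) row norms
        dot = X @ Y.T                                   # (n, m) inner products
        u = np.clip(dot / (nx * ny.T), -1.0, 1.0)       # cosine similarities
        kappa0 = (np.pi - np.arccos(u)) / (2 * np.pi)
        kappa1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1 - u**2)) / (2 * np.pi)
        return dot * kappa0 + (nx * ny.T) * kappa1

    def mmd2(X, Y, kernel):
        """Biased estimator of MMD^2(P, Q) = E k(x,x') + E k(y,y') - 2 E k(x,y)."""
        return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

    rng = np.random.default_rng(0)
    real = rng.normal(loc=1.0, size=(256, 2))  # stand-in for data samples
    fake = rng.normal(loc=0.0, size=(256, 2))  # stand-in for generator samples
    print(mmd2(real, fake, relu_ntk))          # the loss an IPM-trained GAN would drive down

Per the paper's claim, an IPM-trained GAN with this discriminator architecture would, in the infinite-width regime, drive down exactly this MMD; swapping relu_ntk for another architecture's NTK models a different discriminator.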

https://hal.archives-ouvertes.fr/hal-03254591
Contributor: Jean-Yves Franceschi
Submitted on: Tuesday, June 8, 2021 - 11:57:20 PM
Last modification on: Tuesday, July 13, 2021 - 3:27:36 AM

Files

gantk2.pdf
Files produced by the author(s)

Licence

Distributed under a Creative Commons Attribution 4.0 International License

Identifiers

  • HAL Id: hal-03254591, version 1
  • arXiv: 2106.05566

Citation

Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, et al. A Neural Tangent Kernel Perspective of GANs. 2021. ⟨hal-03254591⟩
