"Maximum Mean Discrepancy for Training Generative Adversarial Networks" (TODAY at the statistics seminar)
Attention conservation notice: Last-minute notice of a technical talk in a city you don't live in. Only of interest if you (1) care about actor/critic or co-training methods for fitting generative models, and (2) have free time in Pittsburgh this afternoon.
I have been remiss in blogging the statistics department's seminars for
the new academic year. So let me try to rectify that:
- Arthur Gretton, "The Maximum Mean Discrepancy for Training Generative Adversarial Networks"
- Abstract: Generative adversarial networks (GANs) use neural networks as generative models, creating realistic samples that mimic real-life reference samples (for instance, images of faces, bedrooms, and more). These networks require an adaptive critic function while training, to teach the networks how to improve their samples to better match the reference data. I will describe a kernel divergence measure, the maximum mean discrepancy, which represents one such critic function. With gradient regularisation, the MMD is used to obtain current state-of-the-art performance on challenging image generation tasks, including 160 × 160 CelebA and 64 × 64 ImageNet. In addition to adversarial network training, I'll discuss issues of gradient bias for GANs based on integral probability metrics, and mechanisms for benchmarking GAN performance.
- Time and place: 4:00--5:00 pm on Monday, 24 September 2018, in the Mellon Auditorium (room A35), Posner Hall, Carnegie Mellon University
As always, talks are free and open to the public.
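For readers who haven't met the maximum mean discrepancy before, here is a minimal numerical sketch of the idea: the (unbiased) squared MMD between two samples under a Gaussian kernel, which is the kind of two-sample divergence the abstract describes as a critic. This is a toy standalone illustration, not the speaker's code; the kernel choice, bandwidth, and sample sizes are all assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X ~ P and Y ~ Q."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Drop the diagonal (self-similarity) terms to get unbiased
    # within-sample averages.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = 2.0 * Kxy.mean()
    return term_xx + term_yy - term_xy

# Toy use: "real" reference sample vs. a shifted "generator" sample.
# The estimate grows as the two distributions pull apart.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))
fake = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2_unbiased(real, fake))
```

In GAN training the generator is pushed to drive an estimate like this toward zero; the gradient-regularisation and gradient-bias issues the abstract mentions are about how well such kernel critics behave under that optimisation, which is the subject of the talk itself.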
Enigmas of Chance
Posted at September 24, 2018 09:23 | permanent link