On the local convergence of GANs with differential privacy: Gradient clipping and noise perturbation

Authors: Hussein Eliasi
Journal: Expert Systems with Applications
Page number: 1-15
Serial number: 224
Volume number: 1
Paper Type: Full Paper
Published At: 2023
Journal Grade: ISI
Journal Type: Typographic
Journal Country: Iran, Islamic Republic Of
Journal Index: JCR, Scopus

Abstract

Generative Adversarial Networks (GANs) are known to implicitly memorize details of the sensitive data used to train them. To prevent privacy leakage, many approaches have been proposed. One of the most popular is Differentially Private Gradient Descent GANs (DPGD GANs), in which the discriminator’s gradients are clipped and appropriate random noise is added to the clipped gradients. In this article, a theoretical analysis of the convergence behavior of DPGD GANs is presented, and the effect of the clipping and noise perturbation operators on the convergence properties is examined. It is proved that a clipping bound that is too small leads to instability in the training procedure. Then, assuming that the simultaneous/alternating gradient descent method is locally convergent to a fixed point and its operator is L-Lipschitz with L < 1, the effect of noise perturbation on the last-iterate convergence rate is analyzed. We also show that parameters such as the privacy budget, the confidence parameter, the total number of training records, the clipping bound, the number of training iterations, and the learning rate affect the convergence behavior of DPGD GANs. Furthermore, we confirm the effect of these parameters on the convergence behavior of DPGD GANs through experimental evaluations.
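For intuition, the discriminator update described in the abstract (per-record gradient clipping followed by Gaussian noise perturbation) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function and parameter names (dp_gd_step, noise_multiplier, etc.) are assumptions, and in practice the noise scale would be calibrated from the privacy budget and confidence parameter via a DP accountant.

```python
import numpy as np

def dp_gd_step(per_example_grads, params, clip_bound, noise_multiplier,
               learning_rate, rng):
    """One differentially private gradient descent step for the discriminator.

    per_example_grads : array of shape (batch_size, num_params), the gradient
        of the discriminator loss for each training record.
    clip_bound : the clipping bound applied to each per-record gradient.
    noise_multiplier : Gaussian noise scale relative to the clipping bound
        (in practice derived from the privacy budget and confidence parameter).
    """
    batch_size = per_example_grads.shape[0]

    # Clip each per-record gradient so its L2 norm is at most clip_bound.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_bound / (norms + 1e-12))

    # Sum the clipped gradients, perturb with Gaussian noise calibrated to the
    # clipping bound, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_bound, size=params.shape)
    noisy_grad = noisy_sum / batch_size

    # Plain gradient descent update on the discriminator parameters.
    return params - learning_rate * noisy_grad
```

Because the noise scale is proportional to the clipping bound, the bound, the noise level, the learning rate, and the number of iterations jointly determine how close the iterates stay to the fixed point, which is the interplay the abstract's convergence analysis addresses.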


tags: Differential Privacy, Generative Adversarial Network, Convergence, Gradient Descent