On the local convergence of GANs with differential privacy: Gradient clipping and noise perturbation

Authors: Hussein Eliasi
Journal: Expert Systems with Applications
Pages: 1-15
Serial number: 224
Volume: 1
Article type: Full Paper
Publication date: 2023
Journal rank: ISI
Journal type: Print
Country of publication: Iran
Journal index: JCR, Scopus

Abstract

Generative Adversarial Networks (GANs) are known to implicitly memorize details of the sensitive data used to train them. Many approaches have been proposed to prevent this privacy leakage; one of the most popular is Differentially Private Gradient Descent GANs (DPGD GANs), in which the discriminator's gradients are clipped and appropriately calibrated random noise is added to the clipped gradients. In this article, a theoretical analysis of the convergence behavior of DPGD GANs is presented, and the effect of the clipping and noise-perturbation operators on convergence properties is examined. It is proved that a clipping bound that is too small leads to instability in the training procedure. Then, assuming that the simultaneous/alternating gradient descent method is locally convergent to a fixed point and that its update operator is L-Lipschitz with L < 1, the effect of noise perturbation on the last-iterate convergence rate is analyzed. We also show that parameters such as the privacy budget, the confidence parameter, the total number of training records, the clipping bound, the number of training iterations, and the learning rate affect the convergence behavior of DPGD GANs, and we confirm the effect of these parameters through experimental evaluation.
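As an illustration of the clipping-and-noise mechanism described in the abstract, the sketch below shows a single DPGD-style discriminator update in NumPy: each per-record gradient is clipped to an l2 norm bound C, the clipped gradients are averaged, and Gaussian noise scaled by a noise multiplier is added before the parameter step. This is a minimal sketch under our own assumptions, not the authors' implementation; the function name dpgd_step, the noise-multiplier parameterization, and the toy data are hypothetical.

import numpy as np

def dpgd_step(theta, per_example_grads, clip_bound, noise_multiplier, lr, rng):
    """One differentially private gradient-descent step on the discriminator.

    per_example_grads: array of shape (batch_size, dim), one gradient per training record.
    clip_bound:        l2 clipping bound C; too small a value destabilizes training.
    noise_multiplier:  sigma, chosen from the privacy budget (epsilon, delta).
    """
    batch_size = per_example_grads.shape[0]
    # Clip each per-example gradient to l2 norm at most C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_bound / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Average the clipped gradients and perturb with Gaussian noise of
    # standard deviation sigma * C / batch_size on the mean (Gaussian mechanism).
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_bound / batch_size, size=theta.shape
    )
    # Gradient-descent update with learning rate lr.
    return theta - lr * noisy_grad

# Hypothetical usage with random data:
rng = np.random.default_rng(0)
theta = np.zeros(4)
grads = rng.normal(size=(32, 4))
theta = dpgd_step(theta, grads, clip_bound=1.0, noise_multiplier=1.1, lr=0.05, rng=rng)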
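The last-iterate statement in the abstract can be made concrete with a standard contraction argument; the notation below (update operator T, noise term \zeta_t) is ours and only sketches the stated assumption, not the paper's exact bound. If the noiseless simultaneous/alternating update T is L-Lipschitz with L < 1 near a fixed point \theta^* and the private step adds noise \zeta_t, then

\theta_{t+1} = T(\theta_t) + \zeta_t,
\qquad
\|\theta_T - \theta^*\| \le L^{T}\,\|\theta_0 - \theta^*\| + \sum_{t=0}^{T-1} L^{\,T-1-t}\,\|\zeta_t\| \le L^{T}\,\|\theta_0 - \theta^*\| + \frac{\max_t \|\zeta_t\|}{1-L},

so the iterates contract geometrically up to a noise floor governed by the clipping bound and the noise scale.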


tags: Differential Privacy, Generative Adversarial Network, Convergence, Gradient Descent