Authors | Hussein Eliasi |
---|---|
Journal | Expert Systems with Applications |
Pages | 1-15 |
Serial no. | 224 |
Volume no. | 1 |
Article type | Full Paper |
Publication date | 2023 |
Journal rank | ISI |
Journal type | Print |
Country of publication | Iran |
Journal indexing | JCR, Scopus |
Abstract
Generative Adversarial Networks (GANs) are known to implicitly memorize details of the sensitive data used to train them. To prevent privacy leakage, many approaches have been proposed. One of the most popular is Differentially Private Gradient Descent GANs (DPGD GANs), where the discriminator's gradients are clipped and appropriate random noise is added to the clipped gradients. In this article, a theoretical analysis of the convergence behavior of DPGD GANs is presented, and the effect of the clipping and noise perturbation operators on convergence properties is examined. It is proved that if the clipping bound is too small, the training procedure becomes unstable. Then, assuming that the simultaneous/alternating gradient descent method is locally convergent to a fixed point and that its operator is L-Lipschitz with L < 1, the effect of noise perturbation on the last-iterate convergence rate is analyzed. We also show that parameters such as the privacy budget, the confidence parameter, the total number of training records, the clipping bound, the number of training iterations, and the learning rate affect the convergence behavior of DPGD GANs. Furthermore, the influence of these parameters on convergence is confirmed through experimental evaluations.
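For intuition, the sketch below shows the clip-and-perturb discriminator update the abstract describes: per-sample gradient clipping to a bound C followed by Gaussian noise, in the style of DP-SGD. It is a minimal illustration written in PyTorch, not the paper's exact procedure; the function name `dp_discriminator_step`, the loss, and the hyperparameters `clip_bound` and `noise_multiplier` are illustrative assumptions. In standard DP-SGD analyses, the noise multiplier is calibrated from the privacy budget, the confidence parameter, the number of training records, and the number of iterations.

```python
# Minimal sketch of one differentially private discriminator update
# (per-sample gradient clipping + Gaussian noise), assuming PyTorch.
# Shapes and hyperparameters are illustrative, not the paper's setup.
import torch
import torch.nn as nn

def dp_discriminator_step(disc, opt, real_batch, fake_batch,
                          clip_bound=1.0, noise_multiplier=1.1):
    loss_fn = nn.BCEWithLogitsLoss()
    params = [p for p in disc.parameters() if p.requires_grad]
    # Accumulator for the sum of clipped per-sample gradients.
    clipped_sums = [torch.zeros_like(p) for p in params]

    samples = list(zip(real_batch, fake_batch))
    for real_x, fake_x in samples:
        # Per-sample discriminator loss (microbatch of size 1);
        # assumes disc outputs a single logit of shape (1, 1).
        loss = (loss_fn(disc(real_x.unsqueeze(0)), torch.ones(1, 1)) +
                loss_fn(disc(fake_x.unsqueeze(0)), torch.zeros(1, 1)))
        grads = torch.autograd.grad(loss, params)

        # Clip the per-sample gradient so its total norm is <= clip_bound.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_bound / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(clipped_sums, grads):
            acc += g * scale

    # Add Gaussian noise calibrated to the clipping bound, then average.
    batch_size = len(samples)
    for p, acc in zip(params, clipped_sums):
        noise = torch.randn_like(acc) * noise_multiplier * clip_bound
        p.grad = (acc + noise) / batch_size

    opt.step()
```

Clipping bounds the per-sample sensitivity of the update to `clip_bound`, which is exactly the quantity the Gaussian noise is calibrated against; the instability result stated in the abstract concerns choosing this bound too small.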
tags: Differential Privacy, Generative Adversarial Network, Convergence, Gradient Descent