A novel weight normalization technique to improve Generative Adversarial Network training

Authors

  • Sarbaree Mishra, Program Manager at Molina Healthcare Inc., USA

Keywords

Weight Normalization, Generative Adversarial Networks

Abstract

Generative Adversarial Networks (GANs) have emerged as a groundbreaking framework for generating realistic data across various domains. Yet their training remains difficult to manage, plagued by mode collapse and instability. This paper introduces a novel weight normalization technique designed to enhance GAN training by improving convergence rates and overall model performance. Traditional approaches often rely on simple weight scaling or standard normalization methods that do not fully address the unique challenges posed by adversarial training dynamics. Our proposed technique applies a more tailored normalization strategy that adapts to the evolving distribution of weights during training, ensuring more consistent gradient flow and better representational capacity. Through extensive experimentation, we demonstrate that our weight normalization approach significantly reduces the variance in generated samples, leading to higher-fidelity outputs and a more stable training process. We also provide a comprehensive analysis of the impact of weight normalization on both the generator and discriminator networks, highlighting its effectiveness in mitigating common pitfalls of GAN training. Our findings suggest that integrating this technique both enhances the quality of generated samples and smooths the training process, making it easier for practitioners to deploy GANs in real-world applications. This work contributes to ongoing efforts to refine GAN architectures and training methodologies, offering a promising avenue for further research in generative modeling. By presenting a fresh perspective on weight normalization, we aim to inspire subsequent advances in the field, ultimately broadening the scope and applicability of GANs across industries.
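
To make the idea concrete, the short sketch below shows the standard weight-normalization reparameterization w = g * v / ||v|| (Salimans & Kingma, 2016) applied to a small discriminator in PyTorch. This is an illustrative assumption rather than the adaptive scheme proposed in the paper, whose exact update rule is not described in the abstract; the layer name WeightNormLinear and the network sizes are hypothetical.

# Minimal sketch of weight normalization in a GAN discriminator (PyTorch).
# Illustrates the standard reparameterization w = g * v / ||v||, not the
# paper's adaptive variant, whose details are not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightNormLinear(nn.Module):
    """Linear layer whose weight is reparameterized as w = g * v / ||v||."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Direction parameter v and per-unit scale g are learned separately,
        # decoupling the length of each weight vector from its direction.
        self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.g = nn.Parameter(torch.ones(out_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize each row of v to unit norm, then rescale by g.
        w = self.g.unsqueeze(1) * F.normalize(self.v, dim=1)
        return F.linear(x, w, self.bias)


# Hypothetical tiny discriminator for 28x28 single-channel images.
discriminator = nn.Sequential(
    nn.Flatten(),
    WeightNormLinear(28 * 28, 256),
    nn.LeakyReLU(0.2),
    WeightNormLinear(256, 1),
)

if __name__ == "__main__":
    fake_batch = torch.randn(8, 1, 28, 28)
    print(discriminator(fake_batch).shape)  # torch.Size([8, 1])

Decoupling the scale g from the direction v is how standard weight normalization keeps gradient magnitudes insensitive to the raw norm of the weights; the paper's variant presumably adapts this scheme as the weight distribution evolves during training.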

Published

24-09-2019

How to Cite

[1] Sarbaree Mishra, “A novel weight normalization technique to improve Generative Adversarial Network training”, Distrib Learn Broad Appl Sci Res, vol. 5, Sep. 2019, Accessed: Dec. 24, 2024. [Online]. Available: https://dlabi.org/index.php/journal/article/view/243
