Adversarial Attacks and Defense in Neural Networks: Exploring adversarial attacks and defense mechanisms in neural networks to enhance robustness against malicious perturbations

Authors

  • Dr. Jean-Pierre Berger, Associate Professor of Artificial Intelligence, Université Claude Bernard Lyon 1, France

Keywords:

Adversarial Attacks, Neural Networks, Robustness

Abstract

Adversarial attacks pose a significant threat to the deployment of neural networks in critical applications. These attacks manipulate input data with imperceptible perturbations, leading to misclassification by the model. In response, various defense mechanisms have been proposed to enhance the robustness of neural networks against such attacks. This paper provides an overview of adversarial attacks and explores defense strategies, focusing on their effectiveness and limitations. We also discuss the challenges and future directions in this field.
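The abstract's notion of an imperceptible perturbation that flips a model's prediction can be illustrated with the Fast Gradient Sign Method (FGSM), one of the attacks commonly surveyed in this literature. The toy linear classifier, weights, and function names below are illustrative assumptions, not taken from the paper; this is a minimal sketch of the attack's mechanics, not the paper's method.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: nudge input x in the direction of the sign of the loss
    gradient, with each feature changed by at most epsilon."""
    return x + epsilon * np.sign(grad)

# Hypothetical toy linear classifier: predict class "+" iff w . x > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input; w . x = 0.2, so class "+"

# For this linear score, the gradient of a loss that decreases the
# score of the true class is simply -w.
grad = -w
x_adv = fgsm_perturb(x, grad, epsilon=0.15)

print(np.dot(w, x))      # positive: classified "+"
print(np.dot(w, x_adv))  # negative: misclassified "-"
```

Although no coordinate of the input moves by more than 0.15, the decision flips, which is the core vulnerability that the defense mechanisms surveyed in the paper (e.g., adversarial training) aim to mitigate.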


Published

20-09-2022

How to Cite

[1]
Dr. Jean-Pierre Berger, “Adversarial Attacks and Defense in Neural Networks: Exploring adversarial attacks and defense mechanisms in neural networks to enhance robustness against malicious perturbations”, Distrib Learn Broad Appl Sci Res, vol. 8, pp. 1–10, Sep. 2022, Accessed: Jul. 03, 2024. [Online]. Available: https://dlabi.org/index.php/journal/article/view/22
