Pixel Map Analysis Adversarial Attack Detection on Transfer Learning Model

Authors

  • Soni Kumari, Research Scholar, Department of Computer Engineering, Sigma Institute of Engineering, Gujarat, India
  • Dr. Sheshang Degadwala, Professor & Head of Department, Department of Computer Engineering, Sigma University, Gujarat, India

DOI:

https://doi.org/10.32628/CSEIT2410229

Keywords:

Adversarial Attacks, Transfer Learning, Pixel Map Analysis, Pre-Trained Models, Defense Mechanisms, Classification Performance, Benchmark Datasets

Abstract

Adversarial attacks pose a significant threat to the robustness and reliability of deep learning models, particularly in the context of transfer learning where pre-trained models are widely used. In this research, we propose a novel approach for detecting adversarial attacks on transfer learning models using pixel map analysis. By analyzing changes in pixel values at a granular level, our method aims to uncover subtle manipulations that are often overlooked by traditional detection techniques. We demonstrate the effectiveness of our approach through extensive experiments on various benchmark datasets, showcasing its ability to accurately detect adversarial attacks while maintaining high classification performance on clean data. Our findings highlight the importance of incorporating pixel map analysis into the defense mechanisms of transfer learning models to enhance their robustness against sophisticated adversarial threats.
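The abstract's core idea is that adversarial perturbations leave a detectable trace at the level of individual pixel values, even when they are imperceptible to a classifier's high-level features. The paper's own pipeline is not reproduced here; the following is a minimal sketch of the general principle, assuming a median-filter residual as the pixel-map statistic. The function names, window size, and threshold are illustrative choices, not the authors' method.

```python
import numpy as np

def residual_score(img, win=3):
    """Mean absolute deviation of each pixel from its local median.

    A median filter suppresses isolated high-frequency changes, so the
    residual |img - median(img)| grows when a fine-grained perturbation
    (typical of adversarial noise) is spread across the image.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    local_median = np.median(windows, axis=(-2, -1))
    return float(np.mean(np.abs(img - local_median)))

def looks_adversarial(img, threshold=0.01):
    """Flag an image whose pixel residual exceeds a calibrated threshold."""
    return residual_score(img) > threshold

# Demo: a smooth grayscale gradient vs. the same image with a small
# uniform perturbation, as an FGSM-like attack might produce.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
attacked = np.clip(clean + rng.uniform(-0.05, 0.05, clean.shape), 0.0, 1.0)
```

In practice the threshold would be calibrated on clean training data, and a real detector would combine several such granular statistics rather than a single residual; the sketch only shows why pixel-level analysis can expose manipulations that leave classification confidence untouched.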


References

International Journal of Information Security, 2023, doi: 10.1007/s10207-023-00735-6.

X. Cui, “Targeting Image-Classification Model,” pp. 1–13, 2023.

M. Kim and J. Yun, “AEGuard: Image Feature-Based Independent Adversarial Example Detection Model,” Security and Communication Networks, vol. 2022, 2022, doi: 10.1155/2022/3440123.

P. Lorenz, M. Keuper, and J. Keuper, “Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection,” pp. 27–38, 2023, doi: 10.5220/0011586500003417.

L. Shi, T. Liao, and J. He, “Defending Adversarial Attacks against DNN Image Classification Models by a Noise-Fusion Method,” Electronics (Switzerland), vol. 11, no. 12, 2022, doi: 10.3390/electronics11121814.

A. S. Almuflih, D. Vyas, V. V. Kapdia, M. R. N. M. Qureshi, K. M. R. Qureshi, and E. A. Makkawi, “Novel exploit feature-map-based detection of adversarial attacks,” Applied Sciences, vol. 12, no. 10, p. 5161, 2022, doi: 10.3390/app12105161.

M. Khan et al., “Alpha Fusion Adversarial Attack Analysis Using Deep Learning,” Computer Systems Science and Engineering, vol. 46, no. 1, pp. 461–473, 2023, doi: 10.32604/csse.2023.029642.

N. Ghaffari Laleh et al., “Adversarial attacks and adversarial robustness in computational pathology,” Nature Communications, vol. 13, no. 1, pp. 1–10, 2022, doi: 10.1038/s41467-022-33266-0.

Y. Wang et al., “Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey,” pp. 1–46, 2023, [Online]. Available: http://arxiv.org/abs/2303.06302

H. Hirano, A. Minagi, and K. Takemoto, “Universal adversarial attacks on deep neural networks for medical image classification,” BMC Medical Imaging, vol. 21, no. 1, pp. 1–13, 2021, doi: 10.1186/s12880-020-00530-y.


Y. Zheng and S. Velipasalar, “Part-Based Feature Squeezing To Detect Adversarial Examples in Person Re-Identification Networks,” Proceedings - International Conference on Image Processing, ICIP, vol. 2021-September, pp. 844–848, 2021, doi: 10.1109/ICIP42928.2021.9506511.

B. Liang, H. Li, M. Su, X. Li, W. Shi, and X. Wang, “Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction,” IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 1, pp. 72–85, 2021, doi: 10.1109/TDSC.2018.2874243.

M. A. Ahmadi, R. Dianat, and H. Amirkhani, “An adversarial attack detection method in deep neural networks based on re-attacking approach,” Multimedia Tools and Applications, pp. 10985–11014, 2021, doi: 10.1007/s11042-020-10261-5.

K. Ren, T. Zheng, Z. Qin, and X. Liu, “Adversarial Attacks and Defenses in Deep Learning,” Engineering, vol. 6, no. 3, pp. 346–360, 2020, doi: 10.1016/j.eng.2019.12.012.


Published

30-03-2024

Issue

Section

Research Articles

How to Cite

[1]
S. Kumari and S. Degadwala, “Pixel Map Analysis Adversarial Attack Detection on Transfer Learning Model”, Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol., vol. 10, no. 2, pp. 350–357, Mar. 2024, doi: 10.32628/CSEIT2410229.
