Security Challenges and Mitigation Strategies in Generative AI Systems

Authors

  • Satya Naga Mallika Pothukuchi, University College for Women, Hyderabad, India

DOI:

https://doi.org/10.32628/CSEIT25112377

Keywords:

Generative AI Security, Adversarial Attacks, Model Protection, Privacy Preservation, Security Framework Integration

Abstract

This article examines critical security challenges and mitigation strategies in generative AI systems. It explores how these systems have transformed sectors such as financial markets and critical infrastructure while introducing significant security concerns. It analyzes several classes of adversarial attacks, including input perturbation and backdoor attacks, and their impact on model performance, and it investigates model stealing threats and data privacy risks in AI deployments. The article then presents comprehensive mitigation strategies, including advanced defense mechanisms, enhanced protection frameworks, and secure access control implementations. The findings demonstrate that integrated security approaches can protect AI systems while maintaining operational efficiency. This work contributes to the growing body of knowledge on AI security by providing evidence-based strategies for protecting generative AI systems across application domains.
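For readers unfamiliar with the attack class named above, input perturbation attacks add small, deliberately crafted changes to a model's inputs so that its predictions flip while the inputs still look benign. The following is a minimal illustrative sketch (not code from this article) of the well-known Fast Gradient Sign Method against a hypothetical PyTorch classifier; the model, labels, epsilon value, and [0, 1] input range are all assumptions for the example.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: shift every input feature by
    # +/- epsilon in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back
    # to the assumed [0, 1] input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Usage with a hypothetical classifier and batch:
#   x_adv = fgsm_perturb(model, images, labels, epsilon=0.03)

Defenses of the kind the article surveys, such as adversarial training or input sanitization, aim to keep the model's prediction stable under exactly this sort of perturbation.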




Published

05-03-2025

Section

Research Articles