Security Concerns of Generative AI on Social Media

Authors

  • Gurwinder Singh, Department of Computer Applications, Punjabi University, Patiala / Punjab College of Commerce and Agriculture, Chunni Kalan, Punjab, India
  • Ramanpreet Kaur, Department of Computer Applications, Punjabi University, Patiala / Punjab College of Commerce and Agriculture, Chunni Kalan, Punjab, India
  • Khushboo Bharti, Department of Computer Applications, Punjabi University, Patiala / Punjab College of Commerce and Agriculture, Chunni Kalan, Punjab, India

DOI:

https://doi.org/10.32628/CSEIT2511121

Keywords:

Generative AI, social media, security concerns, AI detection, cyberattacks

Abstract

The rise of generative AI technologies has transformed the landscape of social media by enabling the creation of highly realistic and persuasive content. While these advancements offer exciting possibilities for creativity, communication, and engagement, they also introduce significant security concerns. The ability of generative AI to produce deepfakes, misinformation, and other manipulative content poses risks to personal privacy, political stability, and societal trust. This paper explores the security implications of generative AI on social media platforms, examining the challenges of identifying and mitigating AI-generated threats, the role of platform governance in addressing these issues, and the potential for malicious actors to exploit AI for cyberattacks. Additionally, we discuss the ethical considerations surrounding AI-generated content, privacy violations, and the responsibility of tech companies to safeguard users. Finally, we propose strategies for enhancing AI detection systems, fostering public awareness, and promoting the development of responsible AI policies to ensure a secure and trustworthy digital environment.
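To make the abstract's closing proposal of enhancing AI detection systems concrete, the sketch below (not taken from the paper) shows how a platform-side moderation step might screen incoming text posts with a machine-learning classifier and route high-confidence detections to human review. It uses the Hugging Face transformers text-classification pipeline; the model identifier "your-org/ai-text-detector", its "AI_GENERATED" label, and the 0.9 threshold are illustrative placeholders, not choices made or evaluated by the authors.

```python
# Minimal sketch of a platform-side screening step for AI-generated text.
# Assumptions: "your-org/ai-text-detector" is a hypothetical fine-tuned
# classifier whose positive label is "AI_GENERATED"; the 0.9 threshold is
# arbitrary and would be tuned against real false-positive/negative costs.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="your-org/ai-text-detector",  # placeholder model name
)


def flag_for_review(post_text: str, threshold: float = 0.9) -> bool:
    """Return True when the classifier labels the post as machine-generated
    with confidence above the threshold. Flagged posts would be queued for
    human moderation rather than removed automatically."""
    result = detector(post_text, truncation=True)[0]  # {"label": ..., "score": ...}
    return result["label"] == "AI_GENERATED" and result["score"] >= threshold


if __name__ == "__main__":
    sample = "Breaking: officials confirm the event was staged, sources say."
    print("Queue for human review:", flag_for_review(sample))
```

A design point worth noting: routing flags to human reviewers instead of auto-removal reflects the paper's emphasis on platform governance, since detection models of this kind produce false positives and should not act as the sole arbiter of content decisions.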



Published

26-01-2025

Issue

Vol. 11 No. 1 (2025)

Section

Research Articles

How to Cite

Security Concerns of Generative AI on Social Media. (2025). International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 11(1), 1140-1146. https://doi.org/10.32628/CSEIT2511121