Improved Data Poison Detection using Multiple Training Models

Authors

  • Ch. Swathi, Assistant Professor, Department of CSE (AI & ML), Sri Vasavi Institute of Engineering and Technology, Nandamuru, Andhra Pradesh, India
  • Rakshitha Tirumalasetty, UG Student, Department of CSE (AI & ML), Sri Vasavi Institute of Engineering and Technology, Nandamuru, Andhra Pradesh, India
  • Karicharla Kavitha, UG Student, Department of CSE (AI & ML), Sri Vasavi Institute of Engineering and Technology, Nandamuru, Andhra Pradesh, India
  • Bejawada Gana Maruthi, UG Student, Department of Electronics and Communication Engineering, SV College of Engineering (SVCE), Tirupati, Andhra Pradesh, India
  • Nandeti Teja Sai Singh, UG Student, Department of Electronics and Communication Engineering, SV College of Engineering (SVCE), Tirupati, Andhra Pradesh, India

Keywords

Distributed Machine Learning

Abstract

Distributed machine learning (DML) enables training on massive datasets when no single node can compute accurate results within an acceptable time. However, distribution inevitably exposes more potential targets to attackers than a non-distributed environment does. In this paper, we classify DML into basic-DML and semi-DML. In basic-DML, the central server dispatches learning tasks to distributed machines and aggregates their learning results. In semi-DML, the central server additionally devotes its own resources to dataset learning, beyond its duties in basic-DML. We first put forward a novel data poison detection scheme for basic-DML that uses a cross-learning mechanism to identify poisoned data. Then, for semi-DML, we present an improved data poison detection scheme that provides better learning protection with the aid of the central resources. To utilize system resources efficiently, we also develop an optimal resource allocation approach. Simulation results show that the proposed scheme can significantly improve the accuracy of the final model, by up to 20% for support vector machine and 60% for logistic regression, in the basic-DML scenario.
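
For intuition, the cross-learning idea in the abstract can be pictured with a short Python sketch. The snippet below is illustrative only, not the paper's actual algorithm: the shard layout, the simulated label-flipping attack, the hold-one-shard-out training pattern, and the simple outlier threshold are all assumptions made here for demonstration. It trains several logistic regression models on overlapping subsets of the data shards and flags a shard as suspicious when every model scores poorly on it.

    # Minimal cross-learning sketch (illustrative; not the paper's algorithm).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy dataset split into shards, as if dispatched to distributed workers.
    X, y = make_classification(n_samples=1200, n_features=10, random_state=0)
    n_shards = 6
    shards = list(zip(np.array_split(X, n_shards), np.array_split(y, n_shards)))

    # Simulate a poisoning attacker by flipping the labels of one shard.
    Xp, yp = shards[2]
    shards[2] = (Xp, 1 - yp)

    # Cross-learning: train one model per held-out shard on the remaining
    # shards, so every shard is covered by several independently trained models.
    models = []
    for held_out in range(n_shards):
        X_tr = np.vstack([s[0] for i, s in enumerate(shards) if i != held_out])
        y_tr = np.hstack([s[1] for i, s in enumerate(shards) if i != held_out])
        models.append(LogisticRegression(max_iter=1000).fit(X_tr, y_tr))

    # Score every model on every shard; a shard on which the models perform
    # far below the average is flagged as potentially poisoned.
    acc = np.array([[m.score(Xs, ys) for Xs, ys in shards] for m in models])
    per_shard = acc.mean(axis=0)
    flagged = np.where(per_shard < per_shard.mean() - per_shard.std())[0]
    print("flagged shards:", flagged)  # expected: [2]

In a real deployment, the same comparison could run at the central server over results returned by the distributed workers; a flagged shard would then be excluded or relearned.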




Published

22-03-2024

Issue

Vol. 10 No. 2 (2024)

Section

Research Articles

How to Cite

[1]
Ch. Swathi, Rakshitha Tirumalasetty, Karicharla Kavitha, Bejawada Gana Maruthi, and Nandeti Teja Sai Singh, “Improved Data Poison Detection using Multiple Training Models”, Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol, vol. 10, no. 2, pp. 705–712, Mar. 2024, Accessed: May 09, 2024. [Online]. Available: http://ijsrcseit.com/index.php/home/article/view/CSEIT24102101
