Membership Inference Attacks on Machine Learning Model

Authors

  • Preeti, M.Tech Research Scholar (CSE), Shekhawati Institute of Engineering and Technology, Sikar, Rajasthan, India
  • Irfan Khan, Assistant Professor (CSE), Shekhawati Institute of Engineering and Technology, Sikar, Rajasthan, India

DOI:

https://doi.org/10.32628/CSEIT22856

Keywords:

Membership inference attacks, deep learning, privacy risk, differential privacy, FDR, FS, dataset, train, test, attack, genetic algorithm.

Abstract

Machine learning (ML) models today are vulnerable to several types of attacks. In this work, we study a category of attack known as the membership inference attack and show how ML models can leak sensitive information under such attacks. Given a data record and black-box access to an ML model, we present a framework to deduce whether the data record was part of the model’s training dataset. We achieve this by creating an attack ML model that learns to differentiate the target model’s predictions on its training data from its predictions on data outside its training data; in other words, we solve the membership inference problem by converting it into a binary classification problem. We also study mitigation strategies to defend ML models against the attacks discussed in this work. We evaluate our approach on two real-world datasets, (1) CIFAR-10 and (2) UCI Adult (Census Income), using classification as the task performed by the target ML models built on these datasets.
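To make the attack pipeline described in the abstract concrete, the sketch below is a minimal illustration (not the authors' implementation) of converting membership inference into binary classification: it trains a stand-in "shadow" target model whose membership labels are known, queries it for prediction probability vectors, and fits an attack classifier that separates member from non-member records. All names (shadow_model, attack_model, the synthetic data) are illustrative assumptions, and scikit-learn is assumed to be available.

    # Minimal sketch of a membership inference attack as binary classification.
    # Assumption: a shadow model with known train/held-out split stands in for
    # the real (black-box) target model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # 1. Build a shadow "target" model whose training membership we control.
    X, y = make_classification(n_samples=4000, n_features=20, n_classes=2, random_state=0)
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
    shadow_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

    # 2. Query the shadow model (black-box style) for prediction vectors,
    #    labelled 1 for members (training records) and 0 for non-members.
    probs_in = shadow_model.predict_proba(X_in)
    probs_out = shadow_model.predict_proba(X_out)
    attack_X = np.vstack([probs_in, probs_out])
    attack_y = np.concatenate([np.ones(len(probs_in)), np.zeros(len(probs_out))])

    # 3. Train the attack model: a binary classifier that separates
    #    member from non-member prediction vectors.
    A_train, A_test, b_train, b_test = train_test_split(attack_X, attack_y,
                                                        test_size=0.3, random_state=0)
    attack_model = LogisticRegression(max_iter=1000).fit(A_train, b_train)
    print("attack accuracy:", accuracy_score(b_test, attack_model.predict(A_test)))

In the black-box setting considered in the paper, the real target model would be queried in the same way; attacks of this kind often train one attack classifier per output class on shadow models that mimic the target, which the sketch collapses into a single classifier for brevity.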

References

  1. Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang, Deep learning with differential privacy, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308–318.
  2. Giuseppe Ateniese, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici, Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers, International Journal of Security and Networks 10 (2015), no. 3, 137–150.
  3. Jordan Awan, Ana Kenney, Matthew Reimherr, and Aleksandra Slavković, Benefits and pitfalls of the exponential mechanism with applications to Hilbert spaces and functional PCA, 2019.
  4. Michael Backes, Pascal Berrang, Mathias Humbert, and Praveen Manoharan, Membership privacy in MicroRNA-based studies, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 319–330.
  5. Raef Bassily, Adam Smith, and Abhradeep Thakurta, Private empirical risk minimization: Efficient algorithms and tight error bounds, 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, IEEE, 2014, pp. 464–473.
  6. Raphael Bost, Raluca Ada Popa, Stephen Tu, and Shafi Goldwasser, Machine learning classification over encrypted data, NDSS, vol. 4324, 2015, p. 4325.
  7. Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu, A limited memory algorithm for bound constrained optimization, SIAM Journal on Scientific Computing 16 (1995), no. 5, 1190–1208.
  8. Nicholas Carlini and David Wagner, Towards evaluating the robustness of neural networks, 2017 IEEE Symposium on Security and Privacy (SP), IEEE, 2017, pp. 39–57.
  9. Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate, Differentially private empirical risk minimization, Journal of Machine Learning Research 12 (2011), no. 29, 1069–1109.
  10. Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz, GAN-Leaks: A taxonomy of membership inference attacks against generative models, Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (New York, NY, USA), CCS ’20, Association for Computing Machinery, 2020, pp. 343–362.
  11. James S. Cramer, The origins and development of the logit model, Logit Models from Economics and Other Fields, 2003, 1–19.
  12. Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, Diogo Moitinho de Almeida, Brian McFee, Hendrik Weideman, Gábor Takács, Peter de Rivaz, Jon Crall, Gregory Sanders, Kashif Rasul, Cong Liu, Geoffrey French, and Jonas Degrave, Lasagne: First release, August 2015.
  13. Irit Dinur and Kobbi Nissim, Revealing information while preserving privacy, Proceedings of the Twenty-Second ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (New York, NY, USA), PODS ’03, Association for Computing Machinery, 2003, pp. 202–210.
  14. Pedro Domingos, A few useful things to know about machine learning, Communications of the ACM 55 (2012), no. 10, 78–87.
  15. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville, Adversarially learned inference, 2017.
  16. E. Adi, A design of a proxy inspired from human immune system to detect SQL injection and cross-site scripting, Procedia Engineering, vol. 50, 2012, pp. 19–28.

Published

2022-10-30

Issue

Volume 8, Issue 5, September-October 2022

Section

Research Articles

How to Cite

[1]
Preeti and Irfan Khan, "Membership Inference Attacks on Machine Learning Model", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 8, Issue 5, pp. 31-38, September-October 2022. Available at DOI: https://doi.org/10.32628/CSEIT22856