PCIV method for Indirect Bias Quantification in AI and ML Models

Authors

  • Ashish Garg  CSIT, JAIN (Deemed University), Bengaluru, Karnataka, India
  • Dr. Rajesh SL  CSIT, JAIN (Deemed University), Bengaluru, Karnataka, India

DOI:

https://doi.org/10.32628/CSEIT217251

Keywords:

Artificial Intelligence, Machine Learning, Biased Model, AI Ethics, Fair AI, Computer Science

Abstract

Data scientists nowadays make extensive use of black-box AI models (such as neural networks and various ensemble techniques) to solve business problems. Although these models often deliver higher accuracy, they are also less interpretable and hence more prone to bias. Further, AI systems rely on the available training data and therefore remain prone to data bias as well. Many sensitive attributes, such as race, religion, gender, and ethnicity, can form the basis of unethical bias in the data or the algorithm. As the world becomes increasingly dependent on AI algorithms for a wide range of decisions, such as determining access to services like credit, insurance, and employment, the fairness and ethical aspects of these models are becoming increasingly important. Many bias detection and mitigation algorithms have evolved, and several of them handle indirect attributes without requiring them to be explicitly identified. However, these algorithms have gaps and do not quantify indirect bias. This paper discusses the various bias detection methodologies and the tools and libraries available to detect and mitigate bias. It then presents a new methodical approach to detect and quantify indirect bias in AI/ML models.

References

  1. Adebayo 2016, "FairML: ToolBox for Diagnosing Bias in Predictive Modelling", Massachusetts Institute of Technology (June 2016)
  2. Zhang, Lemoine et al. 2018, "Mitigating Unwanted Biases with Adversarial Learning", AIES (Feb 2018)
  3. Saleiro, Kuester et al. 2019, "Aequitas: A Bias and Fairness Audit Toolkit", arXiv (Apr 2019)
  4. Bellamy, Dey et al. 2018, "AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias", arXiv (Oct 2018)
  5. Pleiss, Raghavan 2017, "On Fairness and Calibration", arXiv (Nov 2017)
  6. Zemel, Wu et al. 2013, "Learning Fair Representations", PMLR (2013)
  7. Feldman, Friedler et al. 2015, "Certifying and Removing Disparate Impact", International Conference on Knowledge Discovery and Data Mining (Aug 2015)
  8. Hardt, Price et al. 2016, "Equality of Opportunity in Supervised Learning", arXiv (Oct 2016)
  9. Zhang, Zhou 2019, "Fairness Assessment for Artificial Intelligence in Financial Industry", arXiv (Dec 2019)
  10. Friedler, Scheidegger et al. 2018, "A Comparative Study of Fairness-Enhancing Interventions in Machine Learning", arXiv (Feb 2018)
  11. Tramer, Atlidakis et al. 2017, "FairTest: Discovering Unwarranted Associations in Data-Driven Applications", IEEE European Symposium on Security and Privacy (2017)
  12. Hajian et al. 2013, "A Methodology for Direct and Indirect Discrimination Prevention in Data Mining", IEEE Transactions on Knowledge and Data Engineering (2013)
  13. Calmon, Wei et al. 2017, "Optimized Pre-Processing for Discrimination Prevention", NIPS (Dec 2017)
  14. Kamishima, Akaho et al. 2012, "Fairness-Aware Classifier with Prejudice Remover Regularizer", Springer-Verlag Berlin Heidelberg (2012)
  15. Kamiran, Karim et al. 2012, "Decision Theory for Discrimination-Aware Classification", IEEE 12th International Conference on Data Mining (2012)
  16. Kamiran, Calders 2011, "Data Preprocessing Techniques for Classification without Discrimination", Knowledge and Information Systems, Springer (2011)
  17. Mehrabi, Morstatter et al. 2019, "A Survey on Bias and Fairness in Machine Learning", arXiv (Sep 2019)
  18. Galhotra, Brun et al. 2017, "Fairness Testing: Testing Software for Discrimination", ESEC/FSE (2017)
  19. Bantilan 2017, "Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation", arXiv (Oct 2017)
  20. Wachter, Mittelstadt et al. 2020, "Why Fairness Cannot Be Automated", arXiv (2020)

Published

2019-05-01

Section

Research Articles

How to Cite

[1]
Ashish Garg, Dr. Rajesh SL, "PCIV method for Indirect Bias Quantification in AI and ML Models", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 5, Issue 3, pp. 687-693, May-June 2019. Available at doi: https://doi.org/10.32628/CSEIT217251