Mitigating Bias in AI-Driven Recruitment: The Role of Explainable Machine Learning (XAI)

Authors

  • Ravi Kiran Magham, Osmania University, India

DOI:

https://doi.org/10.32628/CSEIT241051037

Keywords:

Explainable AI, Recruitment Bias, Algorithmic Fairness, Machine Learning Interpretability, AI Ethics

Abstract

This article explores the critical role of Explainable Artificial Intelligence (XAI) in mitigating bias within AI-driven recruitment processes. As AI becomes increasingly prevalent in hiring, concerns about algorithmic bias and fairness have emerged. The article discusses how XAI techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can be used to detect and interpret potential biases in recruitment algorithms. It examines the implementation of XAI for feature importance analysis, algorithmic bias detection, and disparate impact analysis across demographic groups, and it addresses the challenges of balancing model complexity with explainability as well as the limitations of XAI in identifying systemic biases. By implementing XAI strategies, organizations can enhance the fairness and transparency of their hiring practices, ultimately fostering more diverse and equitable workplaces.
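
To make the feature-importance and disparate-impact audits mentioned in the abstract concrete, the sketch below shows one way such checks might look in practice. It uses SHAP on a synthetic recruitment classifier; the data, the `gender` column used as a protected attribute, and the model choice are illustrative assumptions rather than details taken from the article, and a LIME-based audit would follow the same pattern.

```python
# Hypothetical sketch: (1) SHAP feature-importance analysis and (2) a disparate
# impact check across demographic groups. Synthetic data and column names are
# illustrative assumptions only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# --- Synthetic candidate data with bias deliberately injected via "gender" ---
rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "skill_score": rng.uniform(0, 100, n),
    "gender": rng.integers(0, 2, n),  # protected attribute: 0 / 1
})
y = ((X["skill_score"] + 3 * X["years_experience"] + 15 * X["gender"]
      + rng.normal(0, 10, n)) > 90).astype(int)  # "hired" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# --- 1) Feature importance with SHAP: does the protected attribute matter? ---
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
if isinstance(sv, list):      # older shap versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:            # newer versions: (samples, features, classes)
    sv = sv[:, :, 1]
importance = pd.Series(np.abs(sv).mean(axis=0), index=X_test.columns)
print(importance.sort_values(ascending=False))
# A large mean |SHAP| value for "gender" signals the model leans on it.

# --- 2) Disparate impact analysis (four-fifths rule) ---
preds = pd.Series(model.predict(X_test), index=X_test.index)
rate_priv = preds[X_test["gender"] == 1].mean()    # selection rate, group 1
rate_unpriv = preds[X_test["gender"] == 0].mean()  # selection rate, group 0
di_ratio = rate_unpriv / rate_priv
print(f"Disparate impact ratio: {di_ratio:.2f} (values below 0.8 are a red flag)")
```

In this toy setup, the injected dependence on `gender` should surface both as a non-trivial mean |SHAP| value for that feature and as a disparate impact ratio below the four-fifths (0.8) threshold commonly used as a screening rule.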

Published

01-11-2024

Section

Research Articles

How to Cite

[1] Ravi Kiran Magham, “Mitigating Bias in AI-Driven Recruitment: The Role of Explainable Machine Learning (XAI)”, Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol., vol. 10, no. 5, pp. 461–469, Nov. 2024, doi: 10.32628/CSEIT241051037.
