Innovations in Explainable AI: Bridging the Gap Between Complexity and Understanding
DOI: https://doi.org/10.32628/CSEIT2390613

Keywords: Explainable AI, XAI, Interpretable Models, Natural Language Explanations, Model-Agnostic Techniques

Abstract
The integration of Artificial Intelligence (AI) into various domains has advanced remarkably, yet the opacity of complex AI models hinders their broader acceptance and application. This research paper examines the field of Explainable AI (XAI) and explores innovative strategies aimed at bridging the gap between the intricacies of advanced AI algorithms and the need for human comprehension. We investigate key developments, including interpretable model architectures, local and visual explanation techniques, natural language explanations, and model-agnostic approaches. Emphasis is placed on ethical considerations to ensure transparency and fairness in algorithmic decision-making. By surveying and analyzing these innovations, this research contributes to the ongoing discourse on making AI systems more accessible, accountable, and trustworthy, ultimately fostering a harmonious collaboration between humans and intelligent machines in an increasingly AI-driven world.
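To make the model-agnostic, local explanation techniques discussed above concrete, the sketch below applies the SHAP approach of Lundberg and Lee (cited in the references) to an opaque ensemble model. It is a minimal illustration assuming the shap and scikit-learn Python packages; the dataset, model, and parameter choices are placeholders for illustration, not part of the reported study.

```python
# Minimal sketch: model-agnostic, local feature attributions with SHAP
# (Lundberg & Lee, 2017). Assumes the shap and scikit-learn packages;
# the dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train an opaque ensemble model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Wrap only the prediction function, so the explainer stays model-agnostic,
# and compute local (per-prediction) attributions for a few test instances.
explainer = shap.Explainer(model.predict, X_train)
explanation = explainer(X_test.iloc[:5])

# Each row attributes one prediction to the individual input features;
# the attributions sum (approximately) to the prediction minus the base value.
print(explanation.values.shape)  # (5, n_features)
```

Because the explainer only queries `model.predict`, the same pattern applies unchanged to any black-box model, which is the defining property of the model-agnostic techniques surveyed in the paper.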
References
- Doshi-Velez, F., & Kim, B. (2017). "Towards a rigorous science of interpretable machine learning." arXiv preprint arXiv:1702.08608.
- Lundberg, S. M., & Lee, S. I. (2017). "A unified approach to interpreting model predictions." In Advances in Neural Information Processing Systems (pp. 4765-4774).
- Mittelstadt, B., Russell, C., & Wachter, S. (2019). "Explaining explanations in AI." In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279-288).
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). "Anchors: High-precision model-agnostic explanations." In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
- Samek, W., Wiegand, T., & Müller, K. R. (2017). "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models." ITU Journal: ICT Discoveries, 1(1), 1-16.
- Chen, J., Song, L., Wainwright, M. J., & Jordan, M. I. (2018). "Learning to explain: An information-theoretic perspective on model interpretation." In Proceedings of the 35th International Conference on Machine Learning (Vol. 80, pp. 883-892).
- Gartner. (2017). "Top 10 Strategic Technology Trends for 2018." Accessed: Jun. 6, 2018. [Online]. Available: https://www.gartner.com/doc/3811368?srcId=1-6595640781
- Ghorbani, A., Abid, A., & Zou, J. (2019). "Interpretation of neural networks is fragile." In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 1).
- Chander, A., et al. (2018). In Proceedings of MAKE-Explainable AI.
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.