A Literature Review and Research Agenda on Explainable Artificial Intelligence (XAI)
Abstract
Purpose: As Artificial Intelligence penetrates every walk of life and business, we face enormous challenges and opportunities in adopting this revolution. Machine learning models are used to make important decisions in critical areas such as medical diagnosis and financial transactions. To trust the systems powered by these models, we need to know how they make their decisions. However, explaining the predictions or decisions made by a machine learning model remains challenging. Ensembles such as Random Forests and deep learning algorithms make matters worse in terms of explaining the outcomes of decisions, even though these models produce more accurate results. We cannot accept the black-box nature of AI models as we encounter the consequences of their decisions. In this paper, we open this Pandora's box and review the current challenges and opportunities in explaining the decisions or outcomes of AI models. There has been much debate on this topic under headings such as Explainable Artificial Intelligence (XAI), interpreting ML models, and explainable ML models. This paper reviews the latest findings and surveys published in various reputed journals and publications. Towards the end, we distil an open research agenda from these findings and suggest future directions.
Methodology: An exhaustive literature survey of the chosen topic was conducted, covering its fundamental concepts. Secondary data sources relevant to the work, such as books and research papers published in various reputable journals and publications, were selected for the methodology.
Findings/Result: While no single approach currently solves the explainable-ML challenge, some algorithms, such as Decision Trees and the KNN algorithm, provide built-in interpretations. However, there is no common approach, and these models cannot be applied to all problems. Developing model-specific interpretations would be complex and difficult for users to adopt. Model-specific explanations may also produce multiple explanations for the same prediction, making the outcome ambiguous. In this paper, we conceptualize a common approach to building explainable models that may address the current challenges of XAI.
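To illustrate what "built-in interpretation" means for a model like a decision tree, the following is a minimal sketch (not from the reviewed literature): a hand-rolled tree whose prediction comes bundled with the rule path that produced it. The loan-approval features, thresholds, and labels are entirely hypothetical.

```python
# Sketch of intrinsic interpretability: a decision tree can report the
# exact rules it fired, so the prediction explains itself.
# All feature names and thresholds below are illustrative only.

def predict_with_explanation(tree, sample):
    """Walk the tree; return (label, list of rules fired along the path)."""
    path = []
    node = tree
    while "label" not in node:          # internal node: test a feature
        feature, threshold = node["feature"], node["threshold"]
        value = sample[feature]
        if value <= threshold:
            path.append(f"{feature} = {value} <= {threshold}")
            node = node["left"]
        else:
            path.append(f"{feature} = {value} > {threshold}")
            node = node["right"]
    return node["label"], path          # leaf: label plus its explanation

# Toy loan-approval tree (hypothetical).
tree = {
    "feature": "income", "threshold": 50000,
    "left": {"label": "reject"},
    "right": {
        "feature": "debt_ratio", "threshold": 0.4,
        "left": {"label": "approve"},
        "right": {"label": "reject"},
    },
}

label, rules = predict_with_explanation(tree, {"income": 60000, "debt_ratio": 0.3})
print(label)                  # approve
print(" AND ".join(rules))    # income = 60000 > 50000 AND debt_ratio = 0.3 <= 0.4
```

A black-box ensemble offers no such path, which is why post-hoc explanation methods become necessary for those models.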
Originality: After the literature review, the knowledge gathered in the form of findings was used to model a theoretical framework for the research topic. A concerted effort was then made to develop a conceptual model to support future research work.
Paper Type: Literature Review.