The very reason we use AI is to tackle complex problems: problems that traditional computer programs cannot adequately solve. But should you trust an AI algorithm when you cannot even explain how it works?
Algorithmic decision-making is spreading rapidly across industries and public services. By default, AI systems such as machine learning and deep learning models produce outputs with no explanation or context. As predicted outcomes turn into recommendations, decisions, or direct actions, humans look for justification. Explainable AI (XAI) provides cues to how and why a decision was made, helping humans understand and interact with the AI system.
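To make the idea concrete, here is a minimal sketch of one common XAI technique: for an additive (linear) model, each feature's contribution to a prediction is simply its weight times its value, so the output can be decomposed into human-readable reasons. The model weights, feature names, and credit-scoring scenario below are illustrative assumptions, not taken from the text.

```python
def predict_with_explanation(weights, bias, features):
    """Return a prediction plus a per-feature contribution breakdown.

    Each contribution is weight * value, so the contributions sum
    (together with the bias) to the final score -- a simple, faithful
    explanation for additive models.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring model and applicant
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = predict_with_explanation(weights, bias=1.0, features=applicant)
# `why` now states, per feature, how much it pushed the score up or down,
# e.g. that debt lowered the score while income and tenure raised it.
```

For complex, non-additive models (deep networks, gradient-boosted trees), the same "decompose the output into feature contributions" idea is what methods such as SHAP and LIME approximate.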