AI Transparency and Explainability
As AI systems are applied to increasingly complex tasks, experts have pointed out the ethical issues raised by incomprehensible black-box models.
Anyone who is subjected to AI-automated or AI-assisted decision-making should have enough information to be able to challenge the result. For this to be possible, the data, the AI systems, and the AI business models need to be opened up to a relevant extent.
However, the call for transparency is perhaps not best answered by publishing pages of code, as there is a limit to how much information people can and want to process. In AIGA, we are exploring the trade-offs between transparency and information overload.
For us, transparency means three things:
- Traceability: The data, processes, and business models affecting an AI system must be documented to the best possible standard. The same applies to the decisions the AI system makes.
- Explainability: Technical explainability requires that the decisions made by an AI system can be understood and traced by humans; a minimal sketch of one such technique follows this list.
- Communication: Consumers and citizens have the right to be informed when they are interacting with an AI system, so AI systems must be identifiable as such. An AI system's capabilities, such as its accuracy, and its limitations should be communicated to end-users in an appropriate manner.
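One common way to operationalize technical explainability is post-hoc feature attribution. The sketch below uses permutation feature importance from scikit-learn on a stock dataset; the dataset, model, and parameters are illustrative assumptions rather than an AIGA deliverable.

```python
# Minimal explainability sketch: permutation feature importance.
# Assumptions: scikit-learn is available, and a random forest on the
# built-in breast-cancer dataset stands in for the deployed model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop means the model's decisions rely heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

An attribution ranking like this is only one ingredient of explainability: it tells a reviewer which inputs a decision traces back to, but it still has to be translated into terms the affected person understands.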
More information is not always better, and quality is more important than quantity. Thus, we are also studying how the amount of detail could be adjusted to provide tailored explanations to different target audiences.
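As a toy illustration of such tailoring, the sketch below renders the same hypothetical attribution scores at two levels of detail; the audience labels and wording are our own assumptions, not a fixed AIGA design.

```python
# Hypothetical sketch: one set of attribution scores, two audiences.
def explain(attributions: dict[str, float], audience: str) -> str:
    ranked = sorted(attributions.items(), key=lambda pair: -abs(pair[1]))
    if audience == "consumer":
        # A single plain-language sentence naming only the dominant factor.
        top_factor, _ = ranked[0]
        return f"The decision was influenced most by your {top_factor}."
    if audience == "auditor":
        # The full ranked attribution table, for traceability reviews.
        return "\n".join(f"{name}: {score:+.3f}" for name, score in ranked)
    raise ValueError(f"unknown audience: {audience!r}")

scores = {"income": 0.42, "loan amount": 0.21, "age": -0.07}
print(explain(scores, "consumer"))  # one plain-language sentence
print(explain(scores, "auditor"))   # full ranked table
```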
The research approaches include:
- Experimental research
- Expert interviews