AI Transparency and Explainability
As AI systems are applied to increasingly complex tasks, experts have raised ethical concerns about incomprehensible black-box models.
Anyone who is subjected to AI-automated or AI-assisted decision-making should have enough information to be able to challenge the result. For this to be possible, the data, the AI systems, and the AI business models need to be opened up to a relevant extent.
However, the call for transparency is perhaps not best answered by publishing pages of code, as there is a limit to how much information people can and want to process. In AIGA, we are exploring the trade-offs between transparency and information overload.
More information is not always better, and quality matters more than quantity. Thus, we are also studying how the level of detail could be adjusted to provide tailored explanations for different target audiences.
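The idea of audience-tailored detail can be sketched in a few lines. The sketch below is purely illustrative and not AIGA's implementation: the feature importances are made-up values, and the audience categories (`end_user`, `auditor`, `developer`) are hypothetical. It shows one simple mechanism for adjusting explanation detail: ranking factors by influence and truncating the list per audience.

```python
# Illustrative sketch only: audience-tailored explanation detail.
# Feature importances are invented values, not output of a real model.

# Hypothetical detail levels: how many top factors each audience sees
# (None means show everything).
AUDIENCE_DETAIL = {
    "end_user": 2,
    "auditor": 5,
    "developer": None,
}

def explain(importances, audience):
    """Return (feature, weight) pairs ranked by absolute influence,
    truncated to the audience's detail level."""
    k = AUDIENCE_DETAIL.get(audience, 2)
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked if k is None else ranked[:k]

# Invented example: factors behind a hypothetical credit decision.
importances = {
    "income": 0.41,
    "credit_history": -0.33,
    "age": 0.08,
    "employment_years": 0.05,
    "postcode": 0.02,
}

print(explain(importances, "end_user"))   # only the two most influential factors
print(explain(importances, "developer"))  # the full ranked list
```

A real system would of course draw the importances from an explanation method rather than a hand-written table, but the trade-off is the same: the fuller the list, the more complete and the less digestible the explanation.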