This post features two peer-reviewed articles: one defining organizational AI governance and one exploring the role of responsible AI in ESG investing. Both recently published articles are authored by the AIGA research team at the University of Turku.
If you are even slightly aware of ethical AI or AI governance, you have probably heard about principles such as transparency, explainability, fairness, non-maleficence, accountability, or privacy. It is easy to agree with these principles – the real question is how we should translate them into meaningful action.
AI services and products are developed to meet our needs today – or at least in the near future. As the years go by, those needs will change, and the technology may be used very differently from what was initially envisioned. In this blog post, we argue that responsible AI development also involves doing our best to imagine such unexpected uses. It is important that we explore, critique, and discuss the ways today's technologies might shape the future.
Algorithmic decision-making is increasing rapidly across industries as well as in public services. By default, AI systems based on machine learning or deep learning produce outputs with no explanation or context. As the predicted outcomes turn into recommendations, decisions, or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues to how and why a decision was made, helping humans understand and interact with the AI system.
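To make the contrast concrete, here is a minimal illustrative sketch in Python. The "model" is a hypothetical toy linear risk scorer (the feature names and weights are invented for illustration, not taken from any real system): the bare prediction is a single number with no context, while an XAI-style explanation decomposes that same number into per-feature contributions a human can inspect.

```python
# Hypothetical toy linear "risk" model. Feature names, weights, and the
# applicant values below are illustrative assumptions only.
FEATURES = ["income", "debt_ratio", "missed_payments"]
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "missed_payments": -1.2}
BIAS = 0.5

def predict(applicant: dict) -> float:
    """The opaque output: a single score with no explanation or context."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant: dict) -> dict:
    """An XAI-style cue: how much each feature pushed the score up or down."""
    return {f: WEIGHTS[f] * applicant[f] for f in FEATURES}

applicant = {"income": 1.0, "debt_ratio": 0.8, "missed_payments": 2.0}
score = predict(applicant)           # just a number: -2.06
contributions = explain(applicant)   # e.g. missed_payments contributed -2.4
```

For a simple linear model the contributions are exact; for deep learning models, real XAI techniques (such as feature-attribution methods) approximate this kind of decomposition, but the goal is the same: turning a bare output into something a human can question and interact with.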