
From ethical principles to governed AI

If you are even slightly familiar with ethical AI or AI governance, you have probably heard of principles such as transparency, explainability, fairness, non-maleficence, accountability, and privacy. It is easy to agree with these principles; the real question is how to translate them into meaningful action.

Using speculative design to shape preferable futures of AI in society and business

AI services and products are developed to meet our needs today, or at least in the near future. As the years go by, those needs will change, and the technology may come to be used very differently from what was initially intended. In this blog post, we argue that responsible AI development also involves doing our best to imagine such unexpected uses. It is important that we explore, critique, and discuss how today's technologies might shape the future.


Open vacancy in the AIGA project

The AIGA team at the University of Turku is hiring a project researcher or a research assistant for the remaining project period (until August 2022).

How to explain artificial intelligence?

Algorithmic decision-making is spreading rapidly across industries as well as in public services. By default, AI systems based on machine learning or deep learning produce outputs with no explanation or context. As the predicted outcomes turn into recommendations, decisions, or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues to how and why a decision was made, helping humans understand and interact with the AI system.
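To make the idea concrete, here is a minimal sketch of one common XAI technique: decomposing a linear model's score into per-feature contributions, so a human can see which inputs drove the outcome. The feature names, weights, and applicant values below are purely illustrative, not taken from any real system.

```python
# Illustrative only: a toy linear scoring model whose prediction is
# broken down into per-feature contributions (weight * value), one
# simple way an XAI layer can answer "why this decision?".

def explain(weights, features):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring weights and one applicant's features.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = explain(weights, applicant)
print(f"score: {score:.1f}")  # 0.5*4.0 - 0.8*2.0 + 0.3*5.0 = 1.9

# List features by how strongly they influenced the score.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.1f}")
```

Here the explanation shows that income (+2.0) and debt (-1.6) dominate the score, which is the kind of cue that lets a human judge, contest, or accept the recommendation. Real XAI methods such as SHAP or LIME generalize this idea to non-linear models.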