Responsible AI

Using speculative design to shape preferable futures of AI in society and business

AI services and products are developed to meet our needs today, or at least in the near future. As the years go by, those needs will change, and the technology may be used very differently from what was initially intended. In this blog post, we argue that responsible AI development also involves doing our best to imagine such unexpected uses. It is important that we explore, critique, and discuss how today's technologies might shape the future.


How to explain artificial intelligence?

Algorithmic decision-making is spreading rapidly across industries as well as public services. By default, AI systems based on machine learning or deep learning produce outputs with no explanation or context. As the predicted outcomes turn into recommendations, decisions, or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues about how and why a decision was made, helping humans understand and interact with the AI system.
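To make the idea of an explanation concrete, the sketch below shows one of the simplest forms of XAI: for a linear model, each feature's contribution to a prediction is just its learned weight times its input value, so the prediction can be decomposed feature by feature. The weights, feature values, and feature names here are purely illustrative assumptions, not from any real model.

```python
# Minimal sketch of a local explanation for a linear model:
# each feature's contribution to the prediction is weight * value.
# All numbers and names below are hypothetical, for illustration only.

def explain_linear_prediction(weights, bias, features, names):
    """Return the prediction and per-feature contributions (weight * value)."""
    contributions = {n: w * x for n, w, x in zip(names, weights, features)}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

weights = [0.8, -0.5, 0.3]           # hypothetical learned weights
bias = 1.0
features = [2.0, 4.0, 1.0]           # one hypothetical input example
names = ["income", "debt", "tenure"]

pred, contrib = explain_linear_prediction(weights, bias, features, names)
print(f"prediction = {pred:.2f}")
# List features from most to least influential (by absolute contribution)
for name, c in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

Real XAI tooling (e.g. SHAP or LIME) generalizes this additive-attribution idea to non-linear models, but the goal is the same: turning an opaque prediction into per-feature cues a human can inspect.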
