AI

How to explain artificial intelligence?

Algorithmic decision-making is spreading rapidly across industries as well as in public services. By default, AI systems such as machine learning or deep learning models produce outputs with no explanation or context. As predicted outcomes turn into recommendations, decisions or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues as to how and why a decision was made, helping humans understand and interact with the AI system.
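To make the idea concrete, here is a minimal sketch of one common explainability cue: for an additive (linear) scoring model, each feature's contribution to the output can be reported alongside the prediction itself. The feature names, weights and input values below are invented for illustration, not taken from any real system.

```python
# Hypothetical linear credit-scoring model: weights and bias are
# illustrative assumptions, not a real scoring scheme.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
BIAS = 0.1

def predict_with_explanation(features):
    """Return the score plus each feature's additive contribution to it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = BIAS + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"income": 1.0, "debt": 0.5, "years_employed": 2.0}
)
# "why" answers the how-and-why question: e.g. the debt feature
# lowered the score by 0.3, while income raised it by 0.4.
```

For linear models this additive breakdown is exact; for opaque models such as deep networks, XAI techniques (for example SHAP or LIME) approximate a similar per-feature attribution.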


The EU has published an AI regulation proposal. Should we worry?

In late April, the European Commission laid out its vision of how AI technologies should be governed. The proposal meshes AI regulation with product safety rules, prescribes how AI systems should be developed and technically composed, introduces market surveillance arrangements and gives the authorities wide-ranging powers to impose potentially crippling sanctions.

At first glance, the proposal is a lot to digest. It is complex, appears relatively heavy-handed and imposes a host of stringent requirements on developers. But is there really a reason to worry?