How to explain artificial intelligence?

Algorithmic decision-making is increasing rapidly across industries as well as in public services. By default, AI systems based on machine learning or deep learning produce outputs with no explanation or context. As the predicted outcomes turn into recommendations, decisions or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues about how and why a decision was made, helping humans understand and interact with the AI system.
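To make the idea concrete, here is a minimal sketch of one common XAI pattern: an additive explanation that decomposes a model's score into per-feature contributions. The model, weights and feature names below are hypothetical illustrations, not the API of any particular XAI library.

```python
# Hypothetical linear scoring model with an additive explanation.
# For a linear model, each feature's contribution to the score is
# simply weight * value, so the explanation is exact.

weights = {"income": 0.6, "debt": -0.8, "age": 0.1}  # illustrative weights
baseline = 0.5  # model intercept

def predict_with_explanation(features):
    """Return the score together with each feature's contribution to it."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    score = baseline + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation({"income": 1.2, "debt": 0.5, "age": 0.3})
# 'why' shows how much each input pushed the score up or down --
# the kind of cue an XAI layer can surface alongside a decision.
```

For non-linear models the same additive idea underlies methods such as SHAP, where contributions are estimated rather than read off directly.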


Yes, privacy is worth the effort. Here’s why

When we advocate for privacy, we tend to concentrate on the negative consequences of privacy violations [56; 32; 19; 50]. These portrayals are extremely important, but they paint only half of the picture. Privacy also brings net-positive advantages for individuals and organizations, and these advantages can act as powerful internal incentives driving privacy adoption, a key complement to external incentives such as regulation and public pressure.


The EU published an AI regulation proposal. Should we worry?

In late April, the European Commission laid out its vision of how AI technologies should be governed. The proposal meshes AI regulation with product safety rules, prescribes how AI systems should be developed and technically composed, introduces market supervision arrangements and gives authorities wide-ranging powers to impose potentially crippling sanctions.

At first glance, the proposal seems quite a mouthful. It is complex, appears relatively heavy-handed and imposes a host of stringent requirements on developers. But is there really a reason to be worried?


Authority is increasingly expressed algorithmically

AI is inconspicuously present in our everyday lives, guiding our engagement with the surrounding world. We should be talking more about how authority is embedded in these systems and how these systems affect us.


AI Governance Bulletin 15/2021

The bulletin features the latest news, events and initiatives related to AI governance. We post weekly highlights to showcase the range of activities around AIGA's core themes: explainability, transparency, system design and commercialization of responsible AI.


AI Governance Bulletin 12/2021



How to build an AI-ready organization, and why will it soon be a must?

Soon, every growth-minded company's strategy will include a timetable for when the organization will be AI-ready, that is, when the company considers itself ready for large-scale use of artificial intelligence. Because the meaning of the concept still seems hard to grasp, we try here to explain what AI readiness means in our view.


AI Governance Bulletin 9/2021


There is no responsible AI without governance

In spring 2019, a student asked me about companies providing products or services for ethical AI. I couldn't come up with anything off the top of my head, and even a good old Google search let me down. Despite the numerous scientific articles and ethical AI guidelines published, practice-oriented solutions proved hard to find.