There is no responsible AI without governance

Written by: Matti Mäntymäki, Associate Professor, Turku School of Economics


The AI field needs to move beyond ethical guidelines

In spring 2019, a student asked me about companies providing products or services for ethical AI. I couldn't come up with anything off the top of my head, and even a good old Google search let me down. Despite the numerous scientific articles and ethical AI guidelines published, practice-oriented solutions proved hard to find.

I was facing the dilemma of principle-based AI ethics: there is a missing link between ethical principles and how AI systems are developed and operated in practice. The EU's High-Level Expert Group on AI has also acknowledged the issue and responded by producing an Assessment List for Trustworthy AI to accompany its Ethics Guidelines for Trustworthy AI.


Businesses are looking for hands-on advice to operate responsibly

Shortly after, I met with Teemu Birkstedt, a data and analytics expert, entrepreneur, and a newly appointed Professor of Practice. He introduced me to Saidot, a start-up company that was doing hands-on work to promote ethical AI.

Most experts in the field already agree that AI systems should be more understandable to end users and to the subjects of algorithmic decision-making. Key measures such as explainability and transparency keep popping up in the public discussion, but they often remain detached from the practical level.

Many businesses that are willing to take steps towards more ethical AI find it hard to identify tangible actions. The start-up knew this and was helping other businesses answer the all-important question: how do you put responsible AI into practice?


AI Governance – not just another buzzword

My discussions with Teemu quickly led to a shared conclusion: AI governance is perhaps the most important means of turning ethical AI into reality.

Regulations such as the GDPR already apply to algorithmic systems, but AI-specific regulation is also being discussed in the EU. AI governance is the tool that translates these regulations and ethical guidelines into everyday actions within organizations.

We saw the opportunity to work with pioneering companies and, with the help of colleagues, drafted the outline for the AIGA project. We introduced the idea to Outi Keski-Äijö, Head of Business Finland's AI Business program, whose insights and guidance proved invaluable in developing the project further. To make AIGA genuinely practice-oriented and focused on the real challenges of machine learning and transparency, we reached out to Tommi Mikkonen, Professor of Software Engineering at the University of Helsinki, and asked whether his team would join the initiative.


AIGA leads the way

In AIGA, we aim to integrate AI governance into the overall fabric of corporate governance and lay the groundwork for AI auditing processes. Taking the right steps is important, but documenting and communicating those steps is equally so.

The public is increasingly aware of the risks associated with decisions taken by algorithms. At the same time, the AI field is striving to meet the current and potential future regulatory requirements. Against this backdrop, it is evident that a significant European market for responsible AI will emerge soon.

As market demand for AI explainability and transparency grows, Finland can capitalize on this momentum by building a business ecosystem focused on AI governance and auditing. We are grateful that our main funder, Business Finland, shared our vision and saw how AIGA could be at the forefront of opening up this business opportunity.

The AIGA project has now been up and running for more than six months. We are more confident than ever that the themes we are working on are highly relevant not only for academic research and businesses but also from a societal standpoint.

This project introduction opens a series of blog posts from the AIGA consortium partners. More posts will follow in the upcoming months. Stay tuned!