Expectations shape future ecosystems for responsible AI

A pile of EU strategy documents leading to alternative paths towards responsible AI

Written by: Matti Minkkinen, University of Turku


Responsible AI grows from cooperation among regulators, technology developers and AI user organizations

For years, the American hiring-technology company HireVue used a controversial AI application to analyze candidates’ facial features and movements during job interviews. In January 2021, the company announced that an independent audit had found its algorithms to be unbiased, or so it claimed. The case drew public attention when critics argued that HireVue had misrepresented the audit results.

Did the algorithms assess the job candidates fairly? Who should have ensured that the audit itself was unbiased? The algorithmic auditing industry is still emerging, and questions like these reveal its complexity.

Organizations must respond to the challenge of governing artificial intelligence and ensuring its socially responsible development. Action is also needed within industries, in governments and at the global level. Responsible AI can be achieved through collaborative networks, but ecosystems around responsible AI have yet to emerge.


Expectations guide our actions

Even though real ecosystems are only just emerging, they already exist at the level of expectations. The early visions – or blueprints, if you like – are often expressed in strategies and policy papers that shape agendas and funding decisions. In other words, expectations have real-world effects.


Expectations are our images of the future and our ideas about how our actions influence it

The United States and China dominate global AI development, but the European Union aspires to be a key player in responsible AI. In addition to Europe’s competitiveness, the European approach emphasizes societal, ethical and legal considerations that bear directly on questions of governance and responsibility.

Since the Declaration of Cooperation on Artificial Intelligence in 2018, the EU has produced several AI strategy documents that define the agenda for AI research and policy in Europe and beyond. In our study, we examined five of these strategy documents and reconstructed the EU’s storyline on the European responsible AI ecosystem.

Understanding the EU’s vision for responsible AI helps technology developers and AI user organizations decide what to do, and what not to do, in response to these expectations. It helps organizations anticipate their future operating environment and consider their place in the developing European responsible AI vision: as forerunners, experimenters, moderate conformists or something else.



The foundations, drivers and future of responsible AI in the EU

We identified four key themes that together form a storyline of European responsible AI.

  1. Trust and trustworthiness as the foundation of responsible AI. Trust is seen as a prerequisite for wider AI adoption, and trust in turn requires clear regulation, auditing and explainability in AI systems.
  2. Ethics and competitiveness as mutually supportive. Ethical AI is therefore seen as a “win-win proposition”. In the EU documents, strong ethical values are argued to create an appealing brand of trustworthy AI for European businesses.
  3. The need for a common European approach. A shared vision avoids fragmentation and promotes responsible AI across Europe. European values of fundamental rights and human dignity are seen as the foundation of this vision.
  4. Europe as a potential global leader in responsible AI. Global arenas such as the OECD, UNESCO and the WTO are important, but Europe leads the way, building alliances around shared values. In this vision, Europe acts as the ecosystem leader and sets the global standards for responsible AI development and use.


The future of responsible AI is made in the present

We offer a simplified map of the “sea of expectations” coming from regulators. This map helps organizations prioritize these expectations and respond to them. For example, technology providers may give more weight to trust once trust and AI acceptance are seen as pieces of a larger visionary project of ecosystem-building. AI consultants, in turn, can find ways of explaining the responsible AI “win-win proposition” to clients.

The key finding of the study is that the EU’s expectations for responsible AI ecosystems have three layers, moving from beliefs to value-based network-building. Beliefs about trust and the potential of AI provide a shared basis for action, which is taken further in the European value-based vision. In the final layer, the European focus expands into a global network in which Europe acts as a leader on responsible AI.

Our study cannot answer the question of whether the European project for responsible AI will succeed, or whether European leadership on responsible AI is credible or even desirable. However, we provide tools for understanding expectations about emerging ecosystems. We also offer starting points for future research into responsible AI ecosystems, such as comparing responsible AI expectations across regions (North America, Asia) and studying new business models in responsible AI.

The future of responsible AI ecosystems is made in the present, in expectations and actions. Fostering a viable ecosystem for responsible AI will be a crucial task in the coming years, from both economic and ethical perspectives.



