AIGA brings together researchers and business experts who co-innovate for more responsible AI. Combining multiple approaches, AIGA seeks to provide solutions for:
Good AI governance aligns with data governance and corporate governance to support efficient workflows and risk management.
In AIGA, the jointly developed AI governance framework is a tool to ensure that AI systems are built and used strategically. The framework also serves as a starting point for evaluating the state of AI governance and for developing AI auditing protocols.
New revenue can be created through responsibility certificates, expanded service portfolios, or sustainability branding.
AIGA focuses on practice-oriented solutions.
One of our goals is to increase understanding of end-user experiences of algorithmic decision-making. We then link these observations to the choices made when developing an AI system. AIGA also involves designing and testing an executable MLOps pipeline.
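The project materials do not specify what the pipeline contains, so the following is an illustrative sketch only, not AIGA's actual design. It shows one idea behind an executable, governable MLOps pipeline: each stage (data preparation, training, evaluation) records what it did to an audit log, so the development choices mentioned above remain traceable. All names and the toy "model" are hypothetical.

```python
# Illustrative sketch only -- AIGA's real pipeline is not described here.
# A minimal MLOps-style pipeline whose stages each leave an audit trail.
from dataclasses import dataclass, field
import statistics

@dataclass
class AuditLog:
    """Collects (stage, detail) records for later review or auditing."""
    entries: list = field(default_factory=list)

    def record(self, stage: str, detail: str) -> None:
        self.entries.append((stage, detail))

def prepare_data(log: AuditLog) -> list:
    # Toy dataset: points on the line y = 2x + 1.
    data = [(x, 2 * x + 1) for x in range(10)]
    log.record("prepare_data", f"{len(data)} rows")
    return data

def train(data: list, log: AuditLog) -> float:
    # Placeholder "training": estimate the slope from the toy data.
    slope = statistics.mean((y - 1) / x for x, y in data if x != 0)
    log.record("train", f"slope={slope}")
    return slope

def evaluate(model: float, data: list, log: AuditLog) -> float:
    # Mean absolute error of the fitted slope on the same toy data.
    mae = statistics.mean(abs(model * x + 1 - y) for x, y in data)
    log.record("evaluate", f"mae={mae}")
    return mae

log = AuditLog()
data = prepare_data(log)
model = train(data, log)
mae = evaluate(model, data, log)
print([stage for stage, _ in log.entries])
# -> ['prepare_data', 'train', 'evaluate']
```

The point of the sketch is the audit log, not the model: a governance-oriented pipeline makes every stage's inputs and outputs inspectable after the fact, which is what later auditing protocols would build on.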
Regulations and ethical guidelines are the major drivers for responsible AI. In AIGA, they provide the background for all other project activities.
As the legislative framework on the use of AI continues to develop, the tools for AI governance and auditing help companies and organizations respond to the changing operational landscape.
AIGA relies on and builds on state-of-the-art academic research.
The project applies the three cycles of Design Science Research: understanding the use environment, developing the deliverables through iterative testing, and expanding the current knowledge base.
By combining literature reviews, expert interviews, and relevant case studies from industry, AIGA seeks to maximize its impact on the field of responsible AI.
Consumers and citizens
Our aim is to increase AI awareness among consumers and citizens. We do this by communicating the main themes of the project: how AI systems can be made fair, transparent and trustworthy.
The key messages we wish to deliver are as follows:
- Consumers and citizens have a right to know when they are being subjected to automated decision-making.
- AI-automated and AI-assisted decision-making can be applied responsibly.
- The decisions taken by algorithms can be explained, evaluated and challenged.