
Expectations shape future ecosystems for responsible AI

For years, the American hiring company HireVue had used a controversial AI application to analyze candidates’ facial features and movements during job interviews. In January 2021, the company underwent an independent audit that, it claimed, proved its algorithms to be unbiased. The case received public attention when critics argued that the company had misrepresented the audit results.

Were the job candidates assessed fairly by the algorithms? Who should have ensured that the auditing itself was unbiased? The algorithmic auditing industry is still emerging, and questions like these reveal its complexity.


AIGA organizes a seminar and networking event in Turku 11.11.2021

The Artificial Intelligence Governance and Auditing (AIGA) project invites you to a live seminar and networking event on November 11. The seminar speakers represent the major Finnish AI initiatives. After the seminar, there is a chance to network over coffee and snacks. The seminar is open to all, but requires registration.

Using speculative design to shape preferable futures of AI in society and business

AI services and products are developed to meet our needs today – or at least in the near future. As the years go by, needs change, and the technology may come to be used very differently from what was initially intended. In this blog post, we argue that responsible AI development also involves doing our best to imagine such unexpected uses. It is important that we explore, critique, and discuss the ways today’s technologies might shape the future.

Is it free or do you pay with your personal data? – A new balance for the personal data market

Fair use of data is one of the key elements of responsible AI. We shouldn’t only care about the quality of the data, but also about how it was obtained (mind you, there are often important connections between the two). In the digital economy, personal data is currency. Platforms like Facebook or Snapchat appear free but, as we are finally becoming aware, they are not. Are we, as users, paying too high a price for these services? In this blog post, we wish to show that redirecting the flows of personal data is possible.


Open vacancy in the AIGA project

The AIGA team at the University of Turku is hiring a project researcher or a research assistant for the remaining project period (until August 2022).

How to explain artificial intelligence?

Algorithmic decision-making is increasing rapidly across industries as well as in public services. By default, AI systems based on machine learning or deep learning produce outputs with no explanation or context. As the predicted outcomes turn into recommendations, decisions or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues to how and why a decision was made, helping humans understand and interact with the AI system.


Yes, privacy is worth the effort. Here’s why

When we advocate for privacy, we tend to concentrate on the negative consequences of privacy violations [56; 32; 19; 50]. These portrayals are extremely important, but they paint only half of the picture. Privacy also brings net-positive advantages for individuals and organizations. These advantages can act as powerful internal incentives driving privacy adoption, complementing external incentives such as regulation and public pressure.


EU published an AI regulation proposal. Should we worry?

In late April, the European Commission laid out its vision of how AI technologies should be governed. The proposal meshes AI regulation with product safety rules, prescribes how AI systems should be developed and technically composed, introduces market supervision arrangements, and gives authorities wide-ranging powers to impose potentially crippling sanctions.

At first glance, the proposal is quite a mouthful. It is complex, appears relatively heavy-handed, and imposes a host of stringent requirements on developers. But is there really a reason to be worried?


Authority is increasingly expressed algorithmically

AI is inconspicuously present in our everyday lives, guiding our engagement with the surrounding world. We should be talking more about how authority is embedded in these systems and how they affect us.


AI Governance Bulletin 15/2021

The bulletin features the latest news, events and initiatives related to AI governance. We post weekly highlights to showcase the range of activities around AIGA’s core themes: explainability, transparency, system design and the commercialization of responsible AI.