EU published an AI regulation proposal. Should we worry?

[Image: A regulatory compliance stamp with blue lights in the background]

Written by: Mika Viljanen, Associate professor (Laws), University of Turku


A complex blueprint for future European AI regulation

In late April, the European Commission laid out its vision of how AI technologies should be governed. The proposal meshes AI regulation with product safety rules, controls how AI systems should be developed and technically composed, introduces market supervision arrangements and gives the authorities wide-ranging powers to impose potentially crippling sanctions.

At first glance, the proposal seems quite a mouthful. It is complex, appears relatively heavy-handed and imposes a host of stringent requirements on developers. But is there really a reason to be worried?


The proposal targets high-risk AI systems

Initially, the scope of the proposal appears excessive. Together, Article 3(1) and Annex I put in place an expansive definition of artificial intelligence. The regulation seems designed to cover any and all kinds of software-infused digital systems. The idea is quite evidently to cast a net so wide that nothing relevant can escape.

Despite this expansive understanding of AI, the scope is narrowed by the risk-based regulatory approach the Commission adopted. Even though most digital technologies would qualify as AI systems, the regulation only applies to high-risk AI systems. Other systems would be subject to light-touch, voluntary codes of conduct.

The proposal is extremely complex in defining which AI systems count as high-risk. Four distinct categories of high-risk AI systems emerge. The common feature among the categories is that they all may give rise to grave danger to life and health, property, or the protection of fundamental rights.

BOX1: THE FOUR HIGH-RISK AI SYSTEM CATEGORIES

Category 1:
Prohibited AI systems listed in Article 5.
– Manipulative AI systems that cause significant physical or psychological harm (Art 5(1)(a)).
– Exploitative AI systems that cause significant physical or psychological harm (Art 5(1)(b)).
– Social scoring used disproportionally (Art 5(1)(c)).
– Certain biometric recognition systems used for law enforcement (Art 5(1)(d)).

Category 2:
Safety-critical AI systems in products (e.g. machinery, toys, leisure craft) subject to product safety regulation and listed in Annex II.

Category 3:
Safety-critical AI systems in products (e.g. vehicles, aircraft, ships) listed in Art. 2(2).

Category 4:
AI systems listed in Annex III, e.g.
– Biometric recognition systems.
– AI systems used to determine access to and for evaluation in education.
– AI systems used to make recruitment decisions and other employment-related decisions.
– AI systems used in law enforcement.
– AI systems used by the judiciary.


The regulatory approach is relatively soft

If the relatively limited substantive scope of the regulation already soothes nerves, the regulatory approach the Commission has adopted provides further solace. The regulatory approach consists of six layers: prohibitions, management based regulation, technology regulation, ex ante conformity assessment, ex post controls and sanctions.

A considerable portion of the rules do not impose constraints on what kinds of systems can be developed and deployed. Instead, they seek to shape the processes in which the systems are developed. The approach is, consequently, relatively soft. The Regulation would not dictate what AI systems can be used for or what their features should be. To comply, providers simply need to build the required management systems. Compliance with the management based standards will be a significant bureaucratic effort, but once implemented, the processes will likely not drastically impinge on developers’ freedom of action.

BOX2: THE SIX LAYERS OF THE REGULATION

1. Prohibitions
The proposal would only ban a limited roster of AI systems. Prohibited AI systems include systems that cause physical or psychological harm by manipulating individuals through subliminal techniques or by exploiting their vulnerabilities, disproportionate social scoring systems, and certain law enforcement biometric identification systems.

2. Management based regulation
Management based regulation constitutes the core of the proposal. The regulation, if adopted, would, for example, require that AI system providers combine AI development with quality control, risk management and data governance systems. This extensive and detailed ruleset is likely to have a pervasive impact on AI development practices, even though the management based regulation is by and large technology-agnostic. The objective is to ensure that providers know what they are doing, document their choices, and think hard about whether the choices they make are appropriate. Once the processes are in place, providers should no longer make undesirable design choices out of ignorance or disorganization.

While mostly procedural, the risk management requirements, in particular, will likely have important implications for designs. The rules require that developers minimize risks to health and safety and fundamental rights. To comply, an organization must produce information on the risks and then devise approaches and technologies to “eliminate and reduce” and to “mitigate and control residual risks”. While no designs are imposed on developers, the processes are rigged to produce health and safety and human rights “friendly” AI systems.

3. Technology regulation
The proposal also includes binding technology regulation rules that force developers to adopt particular technical designs. The first technology regulation layer is made up of the high-level technical rules contained in Chapter 2 of the regulation. These rules pertain primarily to transparency, human control, event logs, accuracy, robustness, and cybersecurity, but only offer a vague roadmap for developers to follow.

The proposal envisions that more detailed AI system rules will be articulated in binding standards or common specifications. The standards will be drawn up in the customary processes by standardization organizations and will become binding once endorsed by Union authorities. Where standards do not exist and are unlikely to emerge, the Commission may issue Union common specifications to provide detailed guidance. As a result, a dense patchwork of detailed technical rules should emerge in due time. Importantly, the Regulation framework will also guide Union regulatory action in areas listed in Art 2(2) where the Regulation will not directly apply, such as autonomous mobility applications.

4. Ex ante conformity assessment
Rules on ex ante conformity assessment constitute the fourth layer. The playbook is copied from product safety regulation. High-risk AI systems can only be placed on the market, made available, or put into use if they have undergone a conformity assessment and have a CE marking affixed to them.

At first, it seems like the proposal would set up a bureaucratic nightmare structure to implement conformity assessment. In it, national special purpose notification authorities would inspect and certify notified bodies. The notified bodies would then be tasked with assessing the conformity of high-risk systems and issuing conformity certificates. The structure, however, falls flat on its face in Article 43. Most Annex III high-risk AI systems (all Annex III point 2 to 8 systems) would be certified by the providers themselves in an internal conformity assessment procedure. Annex II high-risk systems would be certified by their preexisting product safety conformity assessment bodies. Only Annex III point 1 biometric recognition systems would have to be inspected by notified bodies.

5. Ex post controls
Market supervision, the fifth leg on which the regulatory regime stands, follows the EU product safety supervision template. The core of the approach lies in the supervisory authorities’ powers to force market actors to undertake corrective measures or withdraw systems from the market. The authorities have a right to take action in two cases.

The authorities may, first, carry out an evaluation if an AI system has the potential to adversely affect the health and safety or the protection of persons’ fundamental rights “beyond what is considered reasonable and acceptable”. If the system does not conform to the regulatory requirements, they may force corrective measures or withdrawals. Second, if the system is compliant, the authorities can only act if the system nevertheless presents a risk to the health or safety of persons, to compliance with obligations under Union or national law intended to protect fundamental rights, or to other aspects of public interest protection.

6. Sanctions
Penalties and fines cap off the regime. As the GDPR does, the proposed regulation gives authorities a big stick to wield to incentivize compliance. At their harshest, fines may amount to six percent of yearly turnover if the provider breaches obligations set out in Articles 5 and 10. Other breaches may result in fines of up to four percent of turnover, while refusing to provide, or providing incomplete or misleading, information to supervisory authorities carries a lesser penalty of up to two percent of yearly turnover. The supervisory authorities are given wide latitude in meting out punishment. How repressive enforcement will turn out to be hinges on how the authorities will use their considerable sanction powers.


Should developers be worried?

The proposal has been subject to fierce criticism in early comments. However, on closer inspection, the proposal seems more smoke than fire. Although the proposal is lengthy, proposes to put in place a convoluted regulatory regime, and, truth be told, contains some obligations that appear very expensive to comply with, it does not place severe restrictions on AI development or use. Significant concerns will, nevertheless, linger.

  1. Will vague terminology leave room for extreme interpretations?

    Language in many articles is vague and open-ended and could facilitate stringent and arduous interpretations. However, concerns over extreme interpretations may be overplayed. If the GDPR experience provides any guidance, authorities and courts will likely adopt middle-of-the-road approaches with subdued disruption potential.

  2. Will compliance become costly?

    Compliance costs are a significant concern. As the Regulation would require firms to develop multiple management systems, the proposal could trigger substantial investments in both human resources and technical infrastructures.

    Here, two mitigating factors emerge. First, as the Regulation only affects high-risk systems and the brunt of the requirements falls on providers, most AI developers and users would escape unscathed. Second, most large-scale software developers already have internal management and quality assurance systems. Consequently, the proposed systems would likely only complement existing processes. Thus, instead of assuming that everything is built from the ground up, compliance could be achieved using existing organizational structures. This is likely to mitigate compliance costs.

    Concerns over the fate of SMEs and startups are, however, legitimate. They will be handicapped compared to established, well-resourced players. Yet the alternative of imposing lax or no procedural controls would be suboptimal as well. If providers could operate without management control systems, high-risk systems would be developed in disorganized, chaotic organizations incapable of governing themselves and documenting their actions.

  3. Is management based regulation sufficient?

    The regulatory regime relies heavily on management based regulation and is handicapped by its weaknesses. Management based approaches shift the regulatory focus from directly controlling behavioral outcomes to steering the processes that produce the outcomes. The approach does not articulate what outcomes are desirable; it regulates processes, making regulatory efficacy a function of the processes and their capability to force the desired outcomes.

    Here, the weaknesses become apparent. If the processes are weak, badly designed, or transformed into pro forma tick-box exercises, regulation will have no force and may become perverted, legitimizing ethically flawed, exploitative products by affixing the CE stamp of approval on rotten systems.

  4. Is integrating fundamental rights into AI regulation a step too far?

    Fundamental rights figure prominently in the proposal. This leads to concerns. First, businesses and other private actors have traditionally fought against this horizontal effect (horizontale Drittwirkung) of fundamental rights. They have argued, successfully, that the rights should only bind states and that private actors are best left to pursue their interests freely, bound only by explicit legislative rules.

    This seems to change in the proposal, although the details are murky. Developers would have to optimize their AI systems to minimize risks to fundamental rights.

    This feeds into the second fundamental rights concern. Optimizing an AI system to respect and protect fundamental rights is no trivial task. The task involves highly charged and uncertain value judgments and unavoidably bleeds into the realm of ethics. If the framing is taken seriously, it may force developers to balance their commercial interests against community interests. Here, a cynic might suspect that money is likely to talk. And given the conformity assessment and enforcement models laid out in the proposal, fundamental rights safeguards may, in the end, be faltering at best.

  5. Will the rules be enforced?

    To cap off the soft substantive rules, the enforcement model fleshed out in the proposal appears relatively weak, in particular before an AI system is placed on the market. Here, the weaknesses stem from reliance on internal conformity assessment procedures and limited external verification. Combined with the process-oriented ruleset, the ex ante controls may prove wildly inadequate.

    The ex post controls, in turn, seem capable of producing ugly surprises. In particular, the supervisor’s power to force providers to withdraw dangerous AI systems even if they are compliant may cause significant uncertainty. The management based rules provide little direct normative guidance on, for example, what risk levels are acceptable. If the provider and the supervisor disagree, the provider loses. Considerable investments may go to waste.

    While the risk of uncertainty is real, its seriousness and significance are open to question. The bar for withdrawing a conforming AI system from the market is set high. If the authorities decide to intervene, the product is likely quite dangerous. This should soothe investor concerns and reduce the chilling effect uncertainty may have on actors’ willingness to invest in AI development.

  6. Will the sanctions be applied too aggressively or not aggressively enough?

    The final cause of concern arises out of the sanction provisions. They stack the deck in favor of regulators and supervisors, as the supervisors are given a big stick to use in ruling over their subjects. Used aggressively, the sanction powers could cause significant disruptions, and the sanctions-related risks could stifle innovation. However, if the GDPR experience provides any guidance, NGO activists will likely be sorely disappointed. The big stick may see little use.

So, should one worry? Maybe not. The ambiguities of the proposal do not threaten the use or development of most everyday AI applications. One should, however, prepare to get familiar with the EU Charter of Fundamental Rights and make sure that supply plans for coffee and stationery are up to date. And if your organization has no risk management and quality assurance systems, now is the time to start building an AI governance framework. You will need it, even if the inevitable happens and lobbyists manage to water the proposal down.
