The European Union’s AI Regulation

Written by Onur Bakiner, Faculty Fellow and Associate Professor of Political Science
May 12, 2022

The European Commission’s proposed Artificial Intelligence Act was announced on April 21, 2021, as the 27-member European Union’s (EU) response to the challenges of artificial intelligence (AI) technologies. The regulation, which, if passed, will become enforceable law across the region, builds upon the work of a High-Level Expert Group that published its Ethics Guidelines for Trustworthy AI in 2019 and the White Paper on Artificial Intelligence in 2020.

Before describing the specifics of the proposal, it is important to note two aspects of European-level regulation of digital technologies. First, new laws come on top of an established field of regulation: in European lawmakers’ words, the objective of new laws is to fill gaps rather than a vacuum. In fact, the General Data Protection Regulation (GDPR) already covers relevant issues, as will the proposed Digital Markets Act and Digital Services Act once they are implemented. Second, the architects of EU legislation see themselves as influential first movers in the world of digital ethics, data privacy and AI legislation. The worldwide attention to the GDPR seems to have accentuated this self-perception.

A New Approach to AI Risks

The proposed regulation defines AI broadly, covering systems driven by machine learning algorithms as well as expert systems. It adopts a risk-based approach to AI: instead of identifying problems and solutions sector by sector or assigning roles and responsibilities only to large enterprises, the Act disaggregates the development and deployment of AI tools on the basis of the potential risks they pose to fundamental rights. Annexes specify risk levels for AI applications, with the understanding that the list will be renewed every year.

The four-tiered approach to risk envisions the following categorization (a minimal code sketch of this logic follows the list):

  • Unacceptable risk: applications that should be banned because they pose an unmitigable risk to fundamental freedoms
  • High risk: applications that should be permitted, but only in compliance with reporting requirements and conformity assessments
  • Limited risk: applications that should be permitted, but subject to specific information and transparency requirements
  • Minimal or no risk: applications that should be permitted with no restrictions
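
To make the tiered logic concrete, the sketch below encodes the four categories and their associated obligations in Python. It is purely illustrative: the example applications and the mapping to tiers are assumptions drawn loosely from Commission communications, not the Act’s actual annexes.

    from enum import Enum

    class RiskTier(Enum):
        """The Act's four risk tiers, from most to least restrictive."""
        UNACCEPTABLE = "unacceptable"  # banned outright
        HIGH = "high"                  # reporting and conformity assessments
        LIMITED = "limited"            # information/transparency duties
        MINIMAL = "minimal"            # no restrictions

    # Hypothetical examples for illustration only, not the Act's annexes.
    EXAMPLE_TIERS = {
        "social scoring by public authorities": RiskTier.UNACCEPTABLE,
        "CV-screening software for hiring": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "banned",
        RiskTier.HIGH: "permitted, subject to reporting and conformity assessments",
        RiskTier.LIMITED: "permitted, subject to transparency requirements",
        RiskTier.MINIMAL: "permitted with no restrictions",
    }

    for application, tier in EXAMPLE_TIERS.items():
        print(f"{application}: {tier.value} risk -> {OBLIGATIONS[tier]}")

In the actual Regulation, the assignment of applications to tiers is done in annexes that are expected to be updated over time, so any such mapping would need to track the law as it evolves.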

From a business perspective, the need to balance risk and opportunity underlies the EU approach to AI regulation.

To address the worry that regulation stifles innovation, European lawmakers offer three responses. First, they acknowledge that business interests and other socially desirable goals may conflict irreconcilably in some cases, and they draw the line where business practices pose serious risks to human well-being. Second, they argue that regulation is actually good for business, since legal certainty alleviates the challenges of a fast-changing technological landscape. Quoting the White Paper’s motto, the Act’s objective is to create an “ecosystem of trust and excellence”, thereby emphasizing the synergies between a humane social environment and business competitiveness. Third, the Act has built-in mechanisms for regulatory sandboxes and support for small and medium-sized enterprises (SMEs). In other words, the proscription of risky practices and the promotion of innovation are presented as the two pillars of AI legislation.

Criticisms and Implications for American Business

Commentary on the Act emphasizes the high degree of buy-in from academia, business and civil society in the drafting phase. For example, public consultations with representatives of these sectors are credited with the shift from the two-tiered approach to risk laid out by the High-Level Expert Group in 2019 to the four-tiered one. Nonetheless, numerous objections have been raised. The extra regulatory burden, discussed above, is one of them. Representatives of SMEs ask for more clarity on the definition of an AI system, the allocation of liability down the value chain, and the anticipated costs of impact assessment and compliance.

A related criticism concerns the difficulty of operationalizing, and thus complying with, some of the standards set in the document, such as “a high-quality dataset” or “a dataset free of errors”. Another definitional problem to be ironed out during implementation is specifying what constitutes “manipulative” or “exploitative” – and therefore banned – uses of AI. Finally, the Regulation leaves out AI systems developed or used exclusively for military purposes, on the grounds that regulating those systems falls under the EU’s Common Foreign and Security Policy.
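
To see why a standard like “a dataset free of errors” is hard to operationalize, consider the minimal sketch below. It automates a few mechanical checks (missing values, duplicate rows) of the kind a compliance team might run, but passing such checks says nothing about mislabeled examples, unrepresentative sampling or historical bias. The field names and sample data are hypothetical.

    def naive_quality_audit(rows: list[dict]) -> dict:
        """Run a few mechanical data-quality checks.

        Passing these checks does not establish that a dataset is
        "free of errors" in the Act's sense: mislabeled examples,
        unrepresentative sampling and historical bias are invisible
        to checks like these.
        """
        return {
            "rows": len(rows),
            "missing_values": sum(
                1 for row in rows for value in row.values() if value in ("", None)
            ),
            "duplicate_rows": len(rows) - len({tuple(sorted(r.items())) for r in rows}),
        }

    # Hypothetical sample data for illustration.
    sample = [
        {"applicant_id": "1", "outcome": "approved"},
        {"applicant_id": "2", "outcome": ""},          # missing value
        {"applicant_id": "1", "outcome": "approved"},  # duplicate row
    ]
    print(naive_quality_audit(sample))
    # {'rows': 3, 'missing_values': 1, 'duplicate_rows': 1}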

So, where do things stand right now? The EU has a complex legislative process in which the Commission, the equivalent of an executive branch, proposes new laws that are then debated in the 705-member European Parliament and ultimately negotiated with a third institution, the Council of the European Union. This lengthy process, a product of the EU’s peculiar mix of supranational governance and deference to individual member states’ sensitivities, means that the Regulation will likely be adopted in 2023, with an additional grace period for full adaptation and implementation to follow. Olga Hamana, a Frankfurt-based dispute resolution lawyer, recommends that companies eager to get a head start on the Regulation seek legal advice, establish internal ethics boards, incorporate the principles of transparency into the design of AI systems, deploy explainable AI systems, and conduct self-assessments.

As for American business, the scope of the Act covers not only developers and users of systems located in the EU but also systems affecting people located in the EU. Penalties imposed on American big-tech companies that failed to comply with EU-level rules have been a source of friction in the past. The EU-US Trade and Technology Council’s inaugural September 2021 meeting produced a pledge of close coordination on trustworthy, innovative and rights-respecting AI, but concrete standards will be debated in future meetings. For now, one thing is certain: US-based companies that operate in the EU market and develop or use AI technologies should get ready for the Act.
