Technology
Is the proposed European AI Act innovation-friendly?
Snapshot
- The EU's proposals for regulating AI are designed to boost consumer trust and, by growing demand as a result, to support the AI ecosystem
- But the compliance obligations for AI systems falling in the "high risk" category will be burdensome and may be complex to meet
- Whether the right balance has been struck between protecting EU interests and supporting innovation and investment will depend on details still to be decided, and will determine how far the rest of the world follows the EU's regulatory lead
As real-world applications of artificial intelligence (AI) become ubiquitous and better understood, regulators are developing a nuanced approach to what requires regulation and what does not. The most developed and complete response to date is the EU's AI Act. But how effective will it be at encouraging innovation while avoiding the potential negative impacts of regulation?
Stimulating or blocking ideas?
In 2019, a UK white paper encapsulated the challenge for legislators seeking to shape effective regulation for transformative technology. The government paper stated: "Regulation has a powerful impact on innovation. It can stimulate ideas and can block their implementation. It can increase or reduce investment risk – and steer funding towards valuable R&D or tick-box compliance. It can influence consumer confidence and demand – and determine whether firms enter or exit a market."
Given the risk that an expansive regulatory regime could stifle innovation, the European Commission has aimed to take a proportionate approach, focusing its legislative proposal on areas deemed higher risk. Under its tiered approach, the higher the level of perceived risk, the more significant the regulatory obligations.
The AI Act's first tier applies to applications considered to carry unacceptable risk; as such, they are banned outright. This includes real-time facial recognition systems for law enforcement in public places, social scoring systems, and AI that uses subliminal techniques or seeks to exploit specific groups' vulnerabilities.
The second tier concerns AI applications defined as "high risk": the Act's main focus. High-risk applications will be subject to an extensive and detailed regime, with strong enforcement powers and sanctions. Classification as high risk can flow from general and sector-specific product regulation or from the Act itself, which specifies applications that could affect health and safety or fundamental rights in areas such as credit scoring, critical infrastructure, education, recruitment, task allocation or assessment in the workplace, law enforcement and the justice system. Voluntary codes of conduct are proposed for providers of lower-risk systems that choose to adhere to the high-risk regulatory approach.
A third tier of regulation applies to all AI applications and imposes overarching transparency requirements. Users must be informed that they are interacting with an AI system or AI-generated content where that is not obvious from the context. As drafted, this requirement does not impose a significant compliance burden on innovators.
Outside these categories, the AI Act would have no direct impact. However, legislators are wrestling with the definition of AI, the categories of high risk and whether to regulate "general purpose" AI systems, such as cloud-based text or image generation systems that are built into a huge number of other applications.
Increasing or reducing investment?
The objective of the EU's AI strategy is to grow its AI ecosystem, investment and excellence. The Act's main focus is to ensure that AI applications available in EU markets and to EU citizens are trustworthy and respect the Union's rights and values. The Commission's view is that the new legislation is pro-innovation because it provides legal certainty for businesses and investors.
The "sandbox" provisions in the AI Act are intended to support investment by creating controlled environments for innovative systems to be tested and developed under active supervision by regulators before they are placed on the market. Akin to the Financial Conduct Authority's sandbox in the UK, this would facilitate innovators to design compliance into their new AI tools. Sandboxes enable regulators to grow their understanding of technology.
A pilot for the AI Act sandbox was launched in June 2022 by the Commission and the Spanish government to develop best-practice guidance.
Valuable R&D or tick-box compliance?
Businesses are concerned about the compliance burden that the AI Act is likely to impose for high-risk systems. Part of this concern stems from the pass-or-fail nature of some of these compliance hurdles, such as the need to avoid bias in data sets and to ensure transparency or "logging by design" in AI decision-making. The extent of these burdens will depend on the conformity assessment mechanisms still to be determined by EU Member States.
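To make the "logging by design" point concrete, the sketch below shows one way a developer might record each automated decision with enough context for later audit. Everything in it is a hypothetical illustration rather than a requirement taken from the Act: the credit-scoring rule, model identifiers and logged fields are all assumptions.

```python
# A minimal, hypothetical sketch of "logging by design": every automated
# decision is recorded at the moment it is made, with enough context to be
# audited later. The credit-scoring example is illustrative only.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_log")


def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: str, rationale: str) -> None:
    """Emit one auditable record of an automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,
    }
    logger.info(json.dumps(record))


def score_applicant(income: float, debts: float) -> str:
    """Toy credit-scoring rule, logged at the point of decision."""
    decision = "approve" if income > 2 * debts else "refer"
    log_decision(
        model_id="credit-scorer",        # hypothetical identifier
        model_version="0.1-demo",        # hypothetical version tag
        inputs={"income": income, "debts": debts},
        output=decision,
        rationale="income-to-debt threshold",
    )
    return decision


print(score_applicant(income=50_000, debts=10_000))
```

In a production system the record would go to durable, tamper-evident storage rather than a console logger, but the principle is the same: logging is built in at the point of decision, not bolted on afterwards.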
The AI Act is detailed and prescriptive. It does not take a flexible, outcomes-based approach, but instead requires specific actions, detailed documentation and obligatory registrations. It is a cross-sector framework, not tailored to context or industry, a consequence of the EU digital strategy's wider ambition to set the gold standard for tech regulation.
Beyond the EU
The UK is proposing a "light touch", decentralised approach. This leaves existing regulators to tackle AI applications that fall within their sector or legislative remit using their current powers, with consistency achieved through high-level principles and regulatory co-ordination. An AI governance white paper with more detailed proposals is expected soon.
The US has issued a "Blueprint for an AI Bill of Rights", taking a similar high-level approach to the UK in setting out principles rather than legislating. Separately, federal lawmakers have proposed the Algorithmic Accountability Act, which would be more prescriptive about actions, although it remains unclear whether it will become law. Canada has published its draft Artificial Intelligence and Data Act, which is closer to the EU's initiative.
Against these more flexible approaches, there is a risk that the EU's regime has a tick-box element, given its limited flexibility and limited scope for sector-specific tailoring.
Consumer confidence and demand
The Commission considers the AI Act pro-innovation because trustworthy AI will grow demand. For the majority of permitted AI systems, the Act requires no action, or merely compliance with a transparency requirement. The more complex requirements for high-risk systems, while more burdensome for innovators and less visible to consumers, are intended to secure consumer trust.
A lack of regulation of AI would be more likely to have a negative impact on consumer confidence. Moreover, bringing AI systems within the product regulation framework and requiring the CE mark to demonstrate conformity provides a familiar and visible indicator to encourage trust in compliant systems. Regulation is, after all, a tool for ensuring that market-driven corporate priorities are balanced with wider public-interest considerations such as consumer protection, ethics and societal concerns.
Market entry
The compliance burden that the AI Act creates may stop some providers from selling into the EU. But the EU offers one of the largest markets in the world, with harmonised regulatory standards. For many, this will create the economies of scale needed to justify the upfront investment in compliance for high-risk AI tools. Will the "Brussels effect" seen with other EU regulation be strong enough to achieve the stated goal of setting a global gold standard?
The compliance deadline for the Act is not expected before early 2025. The Commission will review its impact three years later and every four years thereafter. Looking back in, say, 2030, will this regulation have delivered on its intention of growing a strong and beneficial ecosystem of trustworthy AI in the EU?
Authors

Catherine Hammon (lead author) Digital Transformation Knowledge Lawyer, Head of Advisory Knowledge, UK catherine.hammon@osborneclarke.com +44 20 7105 7438

Benjamin Docquir Partner, Belgium benjamin.docquir@osborneclarke.com +32 2 515 93 36

Katie Vickery Partner, UK katie.vickery@osborneclarke.com +44 20 7105 7250

Jonathan Kirschke-Biller Associate, Germany jonathan.kirschke-biller@osborneclarke.com +49 40 55436 4086

John Buyers Partner, UK / Middle East john.buyers@osborneclarke.com +44 20 7105 7105

Dr Jens Schefzig Partner, Germany jens.schefzig@osborneclarke.com +49 40 55436 4054

Tom Sharpe Associate Director, UK tom.sharpe@osborneclarke.com +44 20 7105 7808