Technology

A global regulatory revolution? Disruptive tech and AI pose novel legal challenges
What do the rises of AI and of source-available licences hold for the software world?
Dealmaker, regulatory and competition trends in TMC M&A in the UK and Europe
India's tech M&A activity set for 2024 in resilient and confident shape
Europe recognises the benefit of incentives for tech growth companies
How to retain workforce flexibility as the use of gig workers and independent contractors faces regulatory attack
Commercialising internally developed digital products: an opportunity or risk?
SEPs: The rising global significance of patent rights in everyday technology

A global regulatory revolution? Disruptive tech and AI pose novel legal challenges

  • Governments are very conscious that the growth and enhanced productivity that AI promises will be lost without the trust of consumers and businesses
  • AI regulation will impact neurotechnologies and questions around intellectual property, data protection and privacy will be fundamental
  • Quantum threat – "Q day" is when quantum computing will break current cybersecurity; it is widely predicted to take more than five years, but the risk is coming into sharp focus

AI

It is impossible to discuss new technology and forthcoming legal issues without addressing artificial intelligence (AI). The emergence of ChatGPT just over a year ago demonstrated that AI is capable of paradigm shifts. A flurry of activity at national and international level followed in 2023 to put in place codes of conduct, voluntary pacts, directions to regulators and government bodies – and the EU reached political agreement on the shape of its global benchmark AI Act.

A less well known example of AI legislation is the UK's Automated Vehicles Bill. In contrast to the "horizontal" AI Act that was hammered out in an intense final 38-hour negotiation in December, the Automated Vehicles Bill was created after four years of detailed analysis by the Law Commissions of England and Wales, and Scotland.

It will be seen over the next five years how effective this combination of regulatory obligations and voluntary commitments proves in ensuring safety, transparency and accountability around AI systems. Undoubtedly, there will be litigation, as is already particularly evident in the intellectual property field. There are likely to be further rounds of legislation. Announcements on UK government policy on AI are expected early in 2024, following on from the AI white paper of March 2023. EU institutions have been mulling possible legislation on AI in the workplace, in addition to the provisions of the AI Act.

Growth, regulation and trust

Governments are very conscious that the growth and enhanced productivity that AI promises will be lost without the trust of consumers and businesses. Tools need to be safe and used appropriately and when harm occurs, there should be effective enforcement and adequate avenues for redress. Enactment of the EU's AI liability directive, though likely to be delayed until after the European Parliament election in June 2024, is intended to reduce the evidential and legal challenges in proving liability for harm caused by an AI system.

Developments in AI regulation in the UK, EU and US are being closely watched across the world. The "Brussels effect" may mean that the AI Act is treated as a regulatory gold standard, much as the EU General Data Protection Regulation (GDPR) has become over the past five years. The drivers for regulating AI differ, though, as different jurisdictions face differing challenges.

In India, the proliferation of generative AI content, such as deepfake videos, has led to calls to limit the spread of “harmful” AI. There may be a (narrower) focus on regulating AI-led harms, building on the foundational pillars of detection, prevention, reporting and education. In any case, some level of regulation is inevitable. India’s Ministry of Electronics and Information Technology has designated seven working groups to develop the architecture, vision and objectives of India’s AI strategy.

Capture that thought

Neural interfaces have been in development for some time – a high-profile example being Musk's Neuralink, with a sewing machine-type device that inserts probes into the skull. Non-invasive techniques include skull-caps that read electrical activity in the brain. An initial focus of this technology has been on medical therapeutic applications. But others maintain a wider vision based on a direct interface between digital processing and the human brain, bypassing screens – and potentially the senses – to become one of the primary means of perception.

The interface with medical regulation for invasive neurotechnologies is clear. Currently, devices would only be authorised for therapeutic purposes – for example, smart prosthetics controlled by thoughts, or invasive deep brain stimulation to treat Parkinson’s, Tourette’s and epilepsy. Reading signals from within the brain for other purposes would need a change in regulation.

It is difficult to envisage that this field will not be regulated. In particular, AI regulation will surely impact neurotechnologies. For example, the recently agreed AI Act will introduce bans on AI used for social scoring, for exploitation of vulnerabilities and use of subliminal techniques, for biometric categorisation of people, and for emotion recognition in the workplace and education settings. These bans are clearly relevant to the kinds of applications that neurotechnologies could enable.

IP, data protection and neurotech

Interesting intellectual property questions arise. Copyright applies to the form of an expression of an original idea, not the idea itself. That would usually require some form of action to turn the idea into a "work". But if someone daydreams the lines of a poem, or hums an original tune, and those thoughts were captured and recorded as neurodata, would the neurodata have copyright protection? These issues will need careful analysis.

Another fundamental consideration is how neurotechnologies interact with data protection and privacy. As a mark of the progress in these technologies, the UK Information Commissioner's Office (ICO) has recently published research into neurotechnologies and neurodata, which is "information that is directly produced by the brain and nervous system". It highlighted three main concerns.

First, discrimination is a risk, particularly in non-medical applications such as in the workplace and through new forms of "neurodiscrimination" not yet defined. Secondly, the ICO noted the need for understanding of and transparency about these technologies if people are to give properly informed consent to the processing of their data – which is likely to be special category data requiring additional layers of data protection compliance. Finally, it emphasised the need for "regulatory co-operation and clarity in an area that is scientifically, ethically and legally complex".

Quantum threat

Quantum computing continues to progress and build capacity. One of the areas that will be affected by the increase in computing power that quantum processing will deliver is cybersecurity. Quantum computers are expected soon to be powerful enough to unravel the data encryption techniques used for transmitting and storing data and in access and authorisation systems.

Today's "public key cryptography" method relies on asymmetric encryption techniques, such as the RSA (Rivest-Shamir-Adleman) algorithm, and is considered uncrackable in practice – the calculations needed to break it are so huge that they are impossible at any practical level, even with high-performance classical computers. But the advent of quantum computers at sufficient scale will remove this impossibility.
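To illustrate why RSA's security rests on the difficulty of factoring – the very problem a large-scale quantum computer running Shor's algorithm is expected to solve efficiently – here is a deliberately simplified toy sketch in Python. The tiny primes are for illustration only; real deployments use moduli of 2048 bits or more, plus padding schemes.

```python
# Toy RSA sketch - illustrative only. Real RSA uses 2048-bit-plus
# primes and padding (e.g. OAEP); never use this for actual security.

p, q = 61, 53                 # two secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient (3120)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 65                  # a message encoded as a number < n
cipher = pow(message, e, n)   # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key d
assert plain == message

# The security assumption: given only n, recovering p and q (and hence
# d) is computationally infeasible for classical computers at scale.
# Shor's algorithm on a sufficiently large quantum computer would
# factor n efficiently, which is what "Q day" refers to.
```

Anyone holding only the public pair `(e, n)` can encrypt; decryption requires `d`, which in turn requires knowing the factors of `n` – hence the significance of a machine that can factor large numbers quickly.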

"Q day" is the day on which quantum computing will break current cybersecurity. Although most commentators predict that this will take more than five years, the risk is already coming into sharp focus.

Businesses that process personal data have an obligation under the EU GDPR and its UK equivalent to take "appropriate technical and organisational measures" to ensure a level of security to protect that data, appropriate to the risk. This includes taking into account latest security techniques and acting in proportion to the scale of the risk. Under network and information systems regulation in the EU and UK, a broadly similar test applies to businesses that provide critical infrastructure, such as transport or utilities businesses, and some digital services providers.

Jurisdictions across the world have emulated this “reasonable security” standard, including India's Digital Personal Data Protection Act in 2023.

Cybersecurity compliance

New post-quantum encryption standards for cybersecurity are being developed by organisations such as the US National Institute of Standards and Technology, with the first drafts published in August 2023. The UK's National Cyber Security Centre published guidance in November 2023 on migrating to post-quantum encryption. The state of the art in cybersecurity now includes post-quantum encryption.

Malicious actors are known to be stealing encrypted data with enduring value in order to crack it later or sell it to an organisation that can. Many TMC products and services – particularly communications infrastructure or digital infrastructure including data storage – may be targets for hostile nation-state actors.

Consequently, updating compliance with legal obligations around information, systems and network security is imperative.

Further Osborne Clarke Insights

The EU's AI Act: what do we know so far about the agreed text?

Generative AI litigation – should this be a concern for users of AI tools?

What does the UK's white paper on AI propose and will it work?

EU proposes new approach to liability for artificial intelligence systems

Authors

Catherine Hammon Digital Transformation Knowledge Lawyer, UK catherine.hammon@osborneclarke.com

Laurens Dauwe Partner, Belgium laurens.dauwe@osborneclarke.com

Peter Rudd-Clarke Partner, UK peter.ruddclarke@osborneclarke.com

Roger Segarra Partner, Spain roger.segarra@osborneclarke.com

Ayan Sharma Senior Associate, India ayan.sharma@btgadvaya.com

Vikram Jeet Singh Partner, India vikramjeet.singh@btgadvaya.com

Mark Taylor Partner, UK mark.taylor@osborneclarke.com
