Communications

Don't let customer service generative-AI chatbots become skeletons in the closet: the risks of outsourcing
- Businesses have started utilising chatbot systems based on generative AI large language models to provide better, faster and cheaper 24/7 customer services
- Businesses need to be aware of and manage the rapidly evolving and complex regulatory landscape
- The legal risks associated with outsourcing customer services (such as consumer protection compliance, competition and overall business continuity) need to be considered and mitigated in the agreement with the supplier
With streamlined support, personable, tailored responses and enhanced consumer experience, artificial intelligence (AI)-powered chatbots have captured the attention of businesses worldwide.
It is easy to see why large language model (LLM)-powered chatbots are attractive to businesses with heavy customer interactions and low margins. The efficiencies of reduced costs and potentially more user-friendly outcomes can be significant. However, the use of LLM-powered chatbots also poses substantial legal risks for the unwary. Many smaller businesses do not have the internal expertise to build and maintain their own generative AI chatbots and are increasingly outsourcing to specialist providers. Outsourcing, however, reduces a business's control over the chatbot, which can make legal compliance challenging. How can businesses mitigate these risks?
Contracting for compliance
Consumer regulators across the EU and the UK expect businesses to treat customers fairly and comply with consumer protection rules – and may enforce these rules against businesses that do not. When a business implements a chatbot to interact with consumers, that business is responsible for those interactions under consumer protection law, even if the chatbot is supplied by a third party. It is, therefore, important for businesses to implement appropriate "guardrails", and to flow these down to providers, so as to minimise the potential for the bot to mislead consumers or otherwise treat them unfairly.
A core principle of most consumer laws is that users must be presented with material information in a truthful and transparent manner. In the context of AI chatbots, this is likely to mean that consumers must be properly informed that they are interacting with an automated system and not a human operator. This is often readily dealt with. However, some risks are more difficult to manage.
In particular, a major risk is that the AI chatbot could generate answers that do not give a full picture or are – at times – completely wrong or irrelevant (often referred to as "hallucinations"). Another is that the chatbot may present its output in a manner that may "nudge" (or even compel) users towards a certain choice they would not otherwise have made.
Even knowledgeable providers might not be able to eliminate these potentially negative outcomes given the current state of LLM chatbot technology. Therefore, businesses need to ensure that consumers are presented with a clearly worded and conspicuous disclaimer that reflects the company's specific business context and customer base. This should advise consumers that, however natural the chatbot sounds, it is not human and that there is a risk of spurious answers, preferably with options to allow them to report the problem and quickly obtain human help.
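By way of illustration only, the sketch below shows how such a disclaimer and escalation path might be wired into a chatbot integration. It is a minimal Python sketch under stated assumptions: the function names, commands and messages are hypothetical placeholders, not any particular provider's API.

```python
# Illustrative sketch: wrap a hypothetical chatbot backend so that every
# session opens with a conspicuous AI disclaimer and every turn offers an
# escape hatch to a human agent. All names here are placeholders.

AI_DISCLAIMER = (
    "You are chatting with an automated assistant, not a human. "
    "Answers may occasionally be inaccurate or incomplete. Type 'AGENT' "
    "at any time to speak to a person, or 'REPORT' to flag a problem."
)

def get_bot_reply(message: str) -> str:
    """Placeholder for a call to the outsourced provider's LLM endpoint."""
    return "(model-generated reply would appear here)"

def escalate_to_agent(session_id: str) -> str:
    """Placeholder for routing the conversation to a human operator."""
    return "Connecting you to a member of our team..."

def handle_message(session_id: str, message: str, first_turn: bool) -> str:
    command = message.strip().upper()
    if command == "AGENT":
        return escalate_to_agent(session_id)
    if command == "REPORT":
        return "Thank you. The last reply has been flagged for review."
    reply = get_bot_reply(message)
    # Show the disclaimer at the start of each session so it is
    # conspicuous rather than buried in terms and conditions.
    return f"{AI_DISCLAIMER}\n\n{reply}" if first_turn else reply
```

The point of the wrapper is that the disclaimer and escalation routes sit in the business's own integration layer, so they survive even if the underlying provider or model changes.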
While businesses cannot rely on a disclaimer to avoid consumer protection responsibilities altogether, it can reduce the risk of consumers being misled or alleging unfair treatment. This transparency is a key feature of the newly agreed EU AI Act.
Ensuring business continuity
Outsourcing customer services using LLM-powered chatbots may be convenient, but unless a business has paid significant amounts to have a chatbot developed exclusively for it, it is unlikely to be able to acquire the chatbot at the end of the contract. Unless this is addressed at the outset, it can create business continuity risks.
The way that chatbots are trained and optimised creates unique issues. For example, the provider of the LLM-powered chatbot may use data about the chatbot's outputs (and related user responses and interactions) to optimise the way that it functions. However, improvements cannot usually simply be "given" to a new provider, as they are often inextricably integrated into the weightings and internal configurations of the underlying chatbot system.
Instead, businesses may need to oblige the provider contractually to hand over all acquired data to them or their new provider so that a new system can be "trained" using that data. If this cannot be achieved, whether for contractual or practical reasons, the business may suffer a significant reduction in the quality of the new chatbot's functionality while the new provider's system gathers sufficient data to reoptimise.
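In practice, an exit clause might oblige the provider to export conversation and feedback records in a neutral, machine-readable format that a successor could use as retraining data. Below is a hedged sketch assuming a simple JSONL export; the field names are illustrative assumptions, not an industry-standard schema.

```python
import json

# Illustrative sketch: serialise chatbot interaction records to JSONL, a
# provider-neutral format that a successor system could ingest as
# retraining data. The field names are assumptions, not a standard schema.

interactions = [
    {
        "session_id": "abc-123",
        "user_message": "Where is my order?",
        "bot_reply": "Your order shipped on Monday.",
        "user_feedback": "helpful",  # e.g. an in-chat thumbs up/down
    },
]

def export_for_handover(records: list[dict], path: str) -> None:
    """Write one JSON object per line so the export streams easily."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

export_for_handover(interactions, "handover_export.jsonl")
```

What such an export cannot capture are the model weightings themselves, which is precisely why the contract should secure the raw interaction data instead.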
Businesses therefore need to establish precisely the scope of the AI in use by the provider and agree clear exit responsibilities that address these AI-specific issues.
Avoiding the perils of competition law
Competition law generally prohibits agreements and concerted practices between companies if these appreciably restrict competition. Where a provider supplies the same AI chatbot system to more than one company, competition law risks may arise. For example, because LLM-powered models rely on training data to generate their responses, companies that support training by providing customer service data to the provider should ensure that commercially sensitive data – especially between competitors – is not exchanged (via the provider) during this training process.
There is also the risk of an artificial alignment of service quality or other business parameters, and of an illicit information exchange, if data harvested from the operations of different customers' chatbots is collated by the provider and used either in the initial training of future versions or in their ongoing optimisation.
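One practical safeguard, sketched below, is to redact commercially sensitive details from transcripts before they leave the business's systems. The patterns shown (prices and discount terms) are illustrative assumptions only; the real list would depend on the business and market in question.

```python
import re

# Illustrative sketch: redact commercially sensitive details (here, prices
# and discount percentages) from customer-service transcripts before they
# are shared with a provider that also serves competitors.

SENSITIVE_PATTERNS = [
    (re.compile(r"[£€$]\s?\d[\d,]*(?:\.\d\d)?"), "[PRICE REDACTED]"),
    (re.compile(r"\b\d{1,2}\s?%\s?(discount|rebate|margin)\b", re.I),
     "[COMMERCIAL TERM REDACTED]"),
]

def redact(transcript: str) -> str:
    for pattern, replacement in SENSITIVE_PATTERNS:
        transcript = pattern.sub(replacement, transcript)
    return transcript

print(redact("We can offer 15% discount, bringing the price to £4,250."))
# -> "We can offer [COMMERCIAL TERM REDACTED], bringing the price to [PRICE REDACTED]."
```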
To mitigate these competition risks, companies should include competition law considerations when conducting due diligence on their intended chatbot system.
Generative AI chatbots powered by LLMs are powerful solutions offering a range of benefits. However, their interactions with customers raise a number of nuanced considerations and significant regulatory risks that could ultimately harm a business's operations and reputation.
To reap the benefits and reduce the risks, it is important to implement appropriate guardrails with outsourced providers, to know when to apply the brakes and to remain conscious of the rapidly evolving regulatory constraints.
Authors
Eleanor Williams Associate Director, UK eleanor.williams@osborneclarke.com
Fabrizio Ducci Foreign Associate, NY, Italy fabrizio.ducci@osborneclarke.com
Jon Fell Partner, UK jon.fell@osborneclarke.com
Mike Freer Partner, UK mike.freer@osborneclarke.com
Tamara Quinn Partner, UK tamara.quinn@osborneclarke.com
Manuel Schuster Associate, Germany manuel.schuster@osborneclarke.com