Media

As AI reshapes the media and entertainment industries, what are the key issues and challenges?
- AI technology – in particular, machine learning and generative AI technology – is central to the media and entertainment industries
- The benefits include creative (and cost-effective) solutions to production problems faced by the industry
- The main challenge is how to balance those benefits against the competing interests of rightsholders, producers and talent
Generative artificial intelligence (AI) has the potential to significantly reshape the media and entertainment industries. Some would argue that it already has – in recent years, the creative industries have entered a golden age in which AI-assisted producers, filmmakers and artists can realise their visions at a faster pace and greater scale than ever before. Others worry that AI marks the latest, and greatest, battle between human creativity and the machine, to the detriment of creators and rightsholders.
Just as the uses of AI are evolving at pace, so too are the commercial, legal and regulatory challenges for rightsholders, producers, and talent. What are the principal issues and challenges faced by the industry?
Innovation pathfinder
The media and entertainment industry has long been an early adopter of innovative technology; AI is no different.
In the pre-production phase, AI-powered text generation tools can assist in scriptwriting, while image-generation tools can be used to create concept shots and storyboards. During production, AI-powered game development engines can be used to create interactive virtual sets, which not only allows for greater creative risk-taking but also reduces the carbon footprint associated with large-scale physical sets and location shoots. In post-production, the visual effects industry can use code-generation tools to streamline modelling and animation, while machine learning (ML) has enhanced the dubbing process, with deepfake technology enabling faster and more accurate results.
AI's potential extends beyond content creation. It can be used to automate administrative tasks, freeing up the creative workforce for strategic and innovative work. Distribution managers have also been able to use generative AI to produce text and images for marketing materials, and to conduct ML-based audience analysis on new releases, enabling them to schedule TV content strategically and target advertising more effectively.
However, the generative AI race comes at a cost in an industry that is still recovering from the Covid-19 pandemic, the 2023 Writers Guild of America (WGA) strike, changes in content consumption habits, and reduced investment spending. In this environment, the choice between acquiring AI capability and developing it in-house is a difficult one. Many in the industry also have competing concerns about the implications of generative AI for rightsholders, talent and the broader workforce.
Challenges for rightsholders
AI tools – especially generative AI – are usually trained on extremely large data sets obtained through web-scraping. However, the data and content used in this training process are often protected by IP rights, including copyright. This can quickly lead to complex legal issues; for example, in the case of music, where a single track may have multiple rightsholders.
Several rightsholders have commenced litigation against AI providers. Getty Images launched proceedings against Stability AI, the company behind the text-to-image AI model Stable Diffusion, in both the US and English courts. Music publishers Universal Music, ABKCO and Concord Publishing filed a lawsuit against Anthropic in the US, alleging that the company unlawfully replicated lyrics as part of the training data set used for Claude AI. Such litigation is often complex, particularly against the backdrop of divergent domestic copyright regimes and each industry's specific conventions.
Some rightsholders have chosen to pursue licensing arrangements. For example, the Associated Press reached a deal with OpenAI to license its content for training purposes. Licensing bodies and collective management organisations are likely to serve an increasingly important role in the industry given the prevalence of AI tools.
AI developers, such as OpenAI, the company behind the text-to-image model DALL-E, have begun creating tools that enable rightsholders to exclude certain assets from training data sets. However, this process can be burdensome for rightsholders with particularly extensive portfolios and typically applies on a non-retroactive basis only.
Procuring and using AI tools
Procuring and using AI tools poses a host of challenges for the media and entertainment industry, depending on how the tool is trained, procured, deployed and used. For example:
- IP ownership. Does the output of generative AI tools qualify for IP protection and, if so, who owns the copyright?
- IP infringement. AI-generated content may infringe a third party's IP rights – this risk is even more acute where such tools are used to create commissioned content, which is typically backed by contractual assurances such as IP warranties and indemnities.
- Inaccuracies, bias and discrimination. ML-based decision-making can pose major issues where the AI model produces biased, inaccurate or discriminatory results.
- Confidentiality. It is not always clear how content inputted into an AI tool will be used, nor who will have access to that information. Inputs may be logged and retained by the AI developer and potentially used to adapt or refine the AI.

The broader picture
Hollywood's 2023 strikes were initiated by two of the most prominent representative bodies – the WGA and SAG-AFTRA (the Screen Actors Guild-American Federation of Television and Radio Artists) – which protested against the use of AI to generate scripts and replicate the voices of performers. In the music industry, major labels have also been up in arms about the use of generative AI to create songs that mimic the styles of some of the most-streamed artists.
The increasingly low barrier to creating AI-generated performances and deepfakes is an evolving issue. In India, for example, the surge in AI uptake by the creative industry has brought to the fore complex discussions on personality and image rights and ethical licensing models. The market has been inundated with AI-powered tools that enable users to replicate the voices of popular singers and even deploy deepfakes of celebrities – including those no longer alive. Such use may infringe artists' moral rights, as well as their rights to privacy and personality. Leading names in the Indian film industry have been forced to take pre-emptive steps to ensure that there is no unauthorised use of their likeness.
Studios engaging with talent will increasingly need to grapple with "AI clauses" in talent agreements, which seek to prohibit the use of AI tools in connection with an actor's voice and likeness. The ethical and financial implications for talent and studios alike present a complex starting point for negotiations, especially in the absence of a clear market standard.
Authors
Jamie Heatly Associate Director, UK jamie.heatly@osborneclarke.com
Robert Guthrie Partner, UK robert.guthrie@osborneclarke.com
Gianluigi Marino Partner, Italy gianluigi.marino@osborneclarke.com

Sandhya Surendran Partner, India sandhya.surendran@btgadvaya.com
Emily Tombs Senior Associate (New Zealand Qualified), UK emily.tombs@osborneclarke.com
Ken Wilkinson Partner, UK ken.wilkinson@osborneclarke.com