Copyright in the Age of AI: Legal and Creative Tensions Behind the UK Opt-Out Proposal

In late 2024, the UK Government announced a striking new policy proposal: artificial intelligence
(AI) developers would be permitted to train their models on copyrighted works unless rights
holders explicitly opt out. This proposal—outlined in a public consultation spearheaded by the
Department for Science, Innovation and Technology—represents a sharp pivot from the current
legal position and signals a growing tension between innovation and creative rights in the digital
age.
While the Government’s goal is clear—cementing the UK as a global leader in AI
development—the proposed opt-out scheme has provoked considerable backlash from artists,
authors, musicians, and other creative professionals. Over 1,000 musicians, including Kate
Bush and Annie Lennox, released a silent protest album titled ‘Is This What We Want?’,
condemning what they view as the quiet erosion of artistic rights.
At the heart of this debate lies a fundamental legal and philosophical question: Is our control
over creative property at risk in the age of artificial intelligence?

The Legal Status Quo: Copyright and Consent

Under current UK law, copyright protection arises automatically upon the creation of a qualifying
work, pursuant to the Copyright, Designs and Patents Act 1988 (CDPA). This includes original
literary, dramatic, musical or artistic works, sound recordings, films and broadcasts. Once
protected, creators enjoy exclusive rights to copy, distribute, perform, or adapt their work.
Crucially, this means that AI developers must currently obtain permission from copyright holders
to use their work in training data. Use without consent generally constitutes infringement, unless
a statutory exception applies. Typically, such permission is granted through licensing
agreements, often in exchange for remuneration.
The proposed opt-out scheme reverses this dynamic. Rather than requiring permission, it
places the burden on creators to actively object. But in a digital landscape where copyrighted
content is scraped at scale, often without attribution, knowing when one’s work has been
used—let alone opting out—may be practically impossible.

The Fair Use Debate: Is AI Training a Permitted Exception?

The proposal also raises a fraught question: is AI training covered by any of the existing
statutory exceptions to copyright?
Under sections 29 and 30 of the CDPA, limited exceptions exist for uses such as research,
private study, criticism, review, and quotation. These are sometimes referred to as the UK’s
version of “fair dealing”, similar to the broader “fair use” doctrine in the US. In 2014, a new
exception under section 29A was introduced to permit text and data mining (TDM) for
non-commercial research.
But the application of these exceptions to commercial AI training remains legally uncertain. Most
AI developers—including those creating large language models and generative tools—are
commercial enterprises. The training of such systems on large datasets of copyrighted material
may not qualify as “non-commercial research”, nor does it fall comfortably within the categories
of criticism or review.
The case law here is sparse, but the UK Intellectual Property Office (IPO) has made clear that
the section 29A TDM exception does not extend to commercial AI developers. In fact, in early 2023, following a
similar backlash, the Government scrapped a prior plan to introduce a broad TDM exception for
all purposes, acknowledging that it had not struck the right balance between creators and tech
firms.

Getty Images v Stability AI

Getty Images is bringing a landmark lawsuit against Stability AI in London’s High Court, with
trial beginning on 9 June 2025. Getty alleges that Stability scraped and used over 12 million
copyrighted images—including watermarked files—to train its “Stable Diffusion” AI model
without permission or payment. The trial, set to run until late June 2025, is seen as a pivotal
test of how UK copyright law applies to AI training, and its outcome could reshape licensing
requirements, creative industry practices, and future regulation.

The Opt-Out Model: A Practical and Ethical Dilemma

Technology Secretary Peter Kyle has described the opt-out model as a “starting point” and not a
finished solution. But for many in the creative industries, the model is fundamentally flawed.
First, tracking the use of a work across the internet and across AI training datasets is extremely
difficult without radical transparency. Most AI developers do not disclose the exact datasets they
use. As a result, creators may have no knowledge that their works have been ingested by an AI
model—let alone the ability to prevent or control it.
Second, the lack of a licensing or remuneration framework risks undermining the commercial
value of creative work. Without a requirement to pay for use, tech companies may build
powerful tools—and profitable services—on the back of unlicensed, uncompensated creative
labour.
Finally, there are reputational concerns. Generative AI models may produce outputs that are
offensive, misleading, or false, while mimicking an artist’s voice, style, or likeness. If the training
data includes protected works without the creator’s knowledge or approval, there is a real risk of
misattribution, reputational harm, or dilution of creative identity.

Toward a Balanced Legal Framework: Alternatives and Next Steps

Legal commentators and rights groups have suggested more balanced alternatives. Chief
among them is a statutory licensing scheme, modelled on systems already in place in the music
and broadcasting industries. Such a scheme could permit AI training on protected content but
require developers to pay licence fees to a collecting society or rights management body.
Another suggestion is enhanced transparency legislation. This could require AI companies to
disclose their training datasets or maintain registers of included works. This would allow creators
to identify when and how their works are used—and to object or seek compensation.
Whatever path is taken, there is a clear need for meaningful safeguards: licensing,
transparency, and accountability must be embedded into any future framework. As the UK looks
to embrace the AI revolution, it must not do so at the expense of the very creativity it seeks to
amplify.

Conclusion: Innovation Must Not Eclipse Ownership

The UK Government’s opt-out proposal may be well-intentioned, but as it stands, it risks tilting
the balance too far in favour of tech developers. Without robust protections, creators face the
unauthorised use of their work, the erosion of their commercial rights, and loss of control over
how their identity and art are repurposed in the AI age.
As courts, lawmakers and industry bodies grapple with the implications of AI, one principle must
remain clear: innovation cannot be sustainable if it comes at the cost of ownership, authorship,
and fairness.

Contact Us

If you have questions about protecting your rights in the evolving AI landscape or need legal advice on issues related to intellectual property and AI, our specialist team at Taylor Hampton is here to help.

TAYLOR HAMPTON
T: +44 (0)20 7427 5970
E: [email protected]
W: www.taylorhampton.co.uk
