What are deepfakes?
The age of Artificial Intelligence is upon us and is here to stay. Generative AI tools are becoming increasingly accessible, providing convenience, efficiency and newfound functionality to all internet users. Documents, presentation slides and even creative artwork can now be produced in a matter of seconds. The ability to create something new is a powerful one, and it is not always used for good. The clearest and most problematic abuse of these tools comes in the form of Deepfakes.
A “Deepfake” is a form of audiovisual content, generated or edited by AI, that shows someone saying or doing something they never in fact said or did. An AI analyses images, videos and audio clips of an individual in order to mimic their likeness, voice and behaviour in new scenarios. In theory, any average internet user can take existing online content and create a Deepfake showing anyone doing anything.
Deepfakes are frequently used to impersonate celebrities, politicians or music artists, and are all too commonly used for unlawful activity. Many viral Deepfakes have circulated in recent years: a fake Pentagon explosion in May 2023, a fake interview with Elon Musk aimed at scamming viewers, fake explicit images of celebrities, and a video of President Zelenskyy of Ukraine ordering Ukrainian troops to surrender are but a small sample of the deeply problematic material that Deepfakes can produce.
The Dangers of Deepfakes – Defamation and Reputational Harm
It is easy to see how Deepfakes can be used to seriously harm the reputation of individuals or companies, with catastrophic consequences, be they emotional, financial or political.
There is no question that a Deepfake can be defamatory of the individual portrayed, and little doubt that a Deepfake is a ‘statement’ for the purposes of defamation. Under section 1(1) of the Defamation Act 2013, a statement is not defamatory unless its publication has caused, or is likely to cause, serious harm to the reputation of the claimant. To cause serious harm, the Deepfake would have to lower the individual in the estimation of the reasonable viewer, for example by causing the individual to be shunned or avoided, or by exposing them to hatred, ridicule or contempt.
The Reasonable Viewer of a Deepfake
A question arises as to whether a reasonable viewer could spot a Deepfake and, even if they knew the content was AI-generated, whether it could still be considered defamatory.
The quality of output from such generative AI models is improving rapidly, and it is becoming harder and harder to spot an AI-generated fake.
In the eyes of the law, the objectively reasonable reader is not expected to verify the truth of statements but is presumed to interpret them based on their natural and ordinary meaning.
Further, the ordinary reasonable individual is presumed to be someone who is not unduly suspicious, is capable of distinguishing between suspicion and guilt, and does not assume the worst possible meaning of a statement. This principle was highlighted by the Supreme Court in Lachaux v Independent Print Ltd and another [2019] UKSC 27.
Consequently, unless it is obviously fake, the reasonable reader is entitled to take a Deepfake at face value. Given that a Deepfake can falsely portray any individual as doing anything, there is little doubt that it could cause serious harm to reputation.
The Dangers of Deepfakes – Privacy and Data Protection Issues
Deepfakes do not only threaten reputations; they may also violate individuals’ privacy by using their voices, photos or even identities, almost always without consent. Accordingly, publishing Deepfakes may give rise to claims in the tort of misuse of private information.
Importantly, the courts have recognised that this tort can extend to ‘false private information’, where fabricated content still intrudes upon an individual’s reasonable expectation of privacy. This is a well-established principle, going back to Campbell v MGN Ltd [2004] UKHL 22: a person’s privacy can be infringed by information that is partly or wholly untrue, provided the subject has a reasonable expectation of privacy and the publication is not justified in the public interest. Deepfakes depicting private, intimate or otherwise sensitive scenarios, regardless of their falsity, can meet this threshold. Accordingly, even fabricated content can give rise to legal remedies where it unjustifiably interferes with an individual’s private life.
The creation and dissemination of deepfakes involving identifiable individuals is likely to infringe the UK General Data Protection Regulation (UK GDPR), particularly where personal data is used without a lawful basis such as consent. Personal data includes any information from which a living individual can be identified, and this extends to images and video likenesses. The Information Commissioner’s Office (ICO) has taken the position that facial imagery constitutes biometric data, which is classed as “special category data” requiring enhanced protections. Where a deepfake is created without the subject’s consent, and especially if it is distributed publicly, there may be grounds for regulatory complaint or injunctive relief under data protection law (R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058).
The training of generative AI systems, including those used to produce deepfakes, often relies on existing imagery and audiovisual content. Where such materials are used without permission or a proper licence, this may amount to copyright infringement. Copyright in photographs typically resides with the photographer or the commissioning party, not the subject of the image. The unauthorised use of protected works in training datasets could constitute an infringement under the Copyright, Designs and Patents Act 1988.
What to do if you’ve been Deepfaked
If you ever find yourself the victim of a Deepfake, the first port of call is to preserve as much evidence as you can. Take screenshots and download copies of any videos or photos, noting who uploaded the unlawful content and where it appeared.
Secondly, you can report the content to the platform on which it was uploaded. The Online Safety Act 2023 (OSA), a long-awaited piece of legislation in England and Wales, introduced new offences tackling intimate image abuse and ‘revenge porn’, which came into force on 31 January 2024. Sharing (or threatening to share) intimate deepfaked images or videos of someone without their consent – even if created or disseminated without the intention of causing distress – is now a criminal offence. The operators of social media platforms such as Instagram, TikTok and Facebook have report functions specifically for this purpose.
Lastly, as described above, there are a number of potential civil legal actions you can take. We are seeing a sharp rise in the number of defamation, misuse of private information and data protection cases arising out of Deepfakes coming before the UK courts. Taylor Hampton’s solicitors are specialists in dealing with issues arising from Deepfakes and can be contacted at [email protected] or on +44 20 7427 5970.