Should Social Media Operators be Responsible for Defamatory Content Appearing On Their Platform?

In the digital age, social media usage is increasingly prevalent, allowing users to share
their opinions instantly and globally. As a result, the world has seen an explosion of
defamatory content uploaded across social media, which far too often goes unchecked and
unresolved.
The ability to publish online content anonymously presents a significant hurdle which must
be overcome when attempting to manage harmful and defamatory content. In light of the
potentially catastrophic damage that defamatory publications can cause, the inability to easily
identify the individual responsible often causes costly delays in bringing legal action.
This raises the question of who else can, or should, be held accountable for such
statements – and, in particular, whether a social media operator itself should bear some legal
responsibility for defamatory content published on its website.

Under the Defamation Act 2013, a defamatory statement is a false statement published to a
third party that has caused, or is likely to cause, serious harm to the reputation of the subject of
the statement. In order to bring a successful libel claim, the statement must be published to at
least one third party, refer to an identifiable legal person, and the claimant must show that
serious harm has been, or is likely to be, caused to their reputation as a result (Lachaux v
Independent Print Ltd and another [2019] All ER (D) 42 (Jun)).

Generally, the starting point is to bring an action against the ‘originator’ of the statement – the
individual who wrote and shared the defamatory material online. At common law, liability
extends to any legal person who participated in, secured or authorised the publication, as
recently restated in Hourani v Thomson [2017] EWHC 432 (QB). The author, editor and
publisher are all responsible for the publication of a statement, and each is jointly and
severally liable for the whole damage suffered (McEvoy v Michael [2014] EWHC 701 (QB)).
It should therefore follow that social media platforms (or “operators”) are ‘publishers’ for the
purposes of liability; however, such operators are expressly protected under the law.

Defamation Act 2013 s5: The Platforms’ Defence
Section 5 of the Defamation Act 2013 states that ‘it is a defence for the operator to show that
it was not the operator who posted the statement on the website’. Therefore, social media
platforms that can clearly show it was one of their users who uploaded the statement are
expressly protected. Defamation Act 2013 s5 thus provides a shield for website
operators: if the platform did not create the defamatory content, then it will not be
liable.
This defence is not without exception, however. Under the Act, the website operator can
still be held liable if:
1. The operator received a notice of complaint from the claimant relating to the statement;
2. The operator then failed to respond to the notice in accordance with the law; and
3. The claimant cannot identify the individual who posted the statement.
Therefore, you can pursue a platform operator, but only if the operator fails to take
the appropriate action upon being notified of the defamatory material published on its
website, and there is insufficient information available regarding the identity of the original
publisher.

A claimant can only ‘identify’ the original publisher if they have sufficient information to
bring proceedings against that person (Defamation Act 2013 s5(4)). Sufficient information
often requires a name and an address, for the purpose of serving the claim on them. However,
in exceptional circumstances, the court may allow service of the claim through the platform
itself (such as through Instagram DMs, Facebook Messenger or by email), superseding the
need for an address. It would therefore appear particularly difficult to defeat the Section 5
defence.

Identifying the Individual – Norwich Pharmacal Orders and Section 5
A potential claimant always has the option of seeking a Norwich Pharmacal order
(NPO) from the court.
An NPO, named after the seminal case in which it was established, is a court order which
can require a third-party individual or organisation to disclose information they hold relating to
an anonymous wrongdoer, so that the applicant can find out who the wrongdoer is and bring
their claim against them (Norwich Pharmacal Co v Customs and Excise Comrs [1974] AC
133, [1972] 3 All ER 813, [1972] 3 WLR 870, [1972] RPC 743, 116 Sol Jo 823).

Here, an NPO can be used to compel a social media platform to disclose the relevant
information relating to the identity behind an anonymous account. However, it should be
noted that NPOs are often costly to obtain, and there is no guarantee that the court will grant one.
The courts have yet to answer whether a claimant must at least attempt to
obtain an NPO against a social media platform before being able to claim that they cannot
identify the individual who posted the statement for the purposes of defeating the Defamation
Act 2013 s5 defence.

Defamation Act 2013 s13: The Order to Remove
It is worth noting that, despite the difficulties in bringing libel actions against a social media
platform operator, the law still provides a route to having defamatory material removed from
the platform’s website.
Section 13 of the Defamation Act 2013 gives the court the power, upon a successful libel
claim by a claimant where the defamatory statement was made on such a website, to order
that the operator remove the statement.

Should You Be Able to Sue the Platform Operator: Comparison to the US and EU

In the United States, Section 230 of the Communications Decency Act states that ‘no provider
or user of an interactive computer service shall be treated as the publisher or speaker of any
information provided by another information content provider’. This means platforms like
YouTube or Facebook cannot be held accountable for content created by an individual user,
even if they moderate it.
In the European Union, under the Digital Services Act, ‘providers of intermediary services
shall act diligently in dealing with illegal content… where they obtain actual knowledge or
awareness of such content’. This does not make platforms automatically liable, but it does
impose clear ‘due diligence’ obligations aimed at reducing the problem. Platforms must
carry out risk assessments for the spread of illegal content and systemic harms, provide
users with reasons for content removals or restrictions, and implement fast,
transparent notice-and-action mechanisms. Unlike in the US, neutrality is not a defence: if an
EU platform knew, or should have known, about the content and failed to act, it may be liable.
However, with the ever-rising prominence of social media, individuals have the ability to
write anonymously or to repost statements indefinitely. Defamatory posts are often popular
due to their controversial and scandalous nature, and social media platforms frequently
boost their publicity and exposure through their algorithms. This raises the question of
whether these platforms are merely intermediaries or play a purposeful and
intentional role as publishers.
It is also important to consider the sheer number of statements and the speed at which people
can add to and build upon others. As each statement is actionable in its own right, challenging
each person and each statement as an individual claim may not be feasible; should the
platform then instead be held accountable?

