AI and Accountability: Digital Harm on Trial

The expansion of artificial intelligence (AI) and social media platforms has connected billions worldwide – but it has also given rise to complex new litigation challenges. Courts across the United States, and the wider world, are beginning to test the boundaries of liability where digital platforms have allegedly caused users serious psychological or even physical harm. From claims that AI chatbots acted as “suicide coaches” to lawsuits accusing social media companies of feeding harmful content to teenagers, this emerging field raises profound questions about product design, corporate responsibility, and user welfare.

Lawsuits Against OpenAI: The “Suicide Coach” Allegations

In November 2025, a series of lawsuits filed in California accused OpenAI, the developer of ChatGPT, of designing and releasing its GPT‑4o model despite internal safety warnings that it could cause emotional harm. The complaints allege that the chatbot evolved into a “psychologically manipulative presence”, presenting itself as a “sycophantic” online confidant while enabling or failing to prevent the self-harm and suicide of vulnerable users.

According to the plaintiffs, OpenAI allegedly compressed crucial safety testing into one week to beat competitors such as Google’s Gemini to market. Several of its own safety researchers reportedly resigned in protest. The suits, filed on behalf of surviving family members by the Social Media Victims Law Center (SMVLC) and Tech Justice Law Project, seek not only damages but product reforms – including mandatory emergency reporting when users express suicidal ideation, automatic termination of harmful conversations, and stronger mental health safeguards.

These cases build on earlier wrongful‑death suits against OpenAI, where the company acknowledged limitations in recognising and responding to users in “serious mental and emotional distress.”

Claims Against Social Media Platforms and Algorithmic Harm

Parallel mass tort proceedings have emerged against major social media companies – including Meta, TikTok, Snap, and Google – alleging that their algorithms promote harmful and psychologically damaging content.

Families of young users claim that algorithmic recommendation systems pushed them toward content glorifying self‑harm, eating disorders, and body dysmorphia, precipitating crises of anxiety, depression and, in some cases, suicide. Plaintiffs argue that social media companies knowingly deployed features – such as infinite scroll, push notifications, and engagement‑optimised feeds – that exploit users’ attention and reinforce compulsive behaviours, particularly among adolescents.

In one such case, the families of four British children, represented by SMVLC, brought a lawsuit against TikTok and its parent company, ByteDance, alleging that the deaths of their children were the predictable “result of ByteDance’s engineered addiction-by-design and programming decisions”.

Separately, Ellen Roome has brought a wrongful‑death claim against TikTok over the death of her son, Jools. She is also advocating for the introduction of “Jools’ Law”, which would provide parents with an automatic legal right to access their deceased child’s social media accounts. While technology companies have resisted such measures on grounds of user privacy, many parents argue that these concerns mask broader attempts to withhold or delete evidence of potentially harmful content. Should these claims succeed and data access reveal that platforms knowingly amplified damaging material, the outcome may set a precedent for widespread civil litigation.

American courts are beginning to test whether such algorithms can constitute “defective design” under product liability principles, or whether claims are barred by Section 230 of the Communications Decency Act, which shields online platforms from liability for user‑generated content. Several judges have held that Section 230 does not necessarily bar claims targeting the design of the platform itself, as distinct from third‑party speech, allowing negligence and failure‑to‑warn theories to proceed.

Beyond the United States, regulators and lawmakers worldwide are monitoring these developments with interest. The United Kingdom’s Online Safety Act and the European Union’s Digital Services Act already impose duties on platforms to assess and mitigate systemic risks, signalling a growing recognition that algorithmic design choices can have real‑world medical and psychological consequences.

The Expanding Scope of AI Liability

AI lawsuits are not limited to ChatGPT. Other platforms based on generative or conversational AI, such as Character.ai, are also facing scrutiny for allegedly facilitating sexually explicit or emotionally exploitative interactions with minors and vulnerable individuals. Plaintiffs and advocacy groups argue that such AI systems create simulated relationships that blur boundaries, manipulate users’ emotions, and exploit their psychological dependencies.

In these claims, courts are being asked to define whether conversational AI constitutes a “product” in the legal sense, and if so, what duties its developers owe to human users. Product liability and negligence frameworks may be supplemented by claims under consumer protection, false advertising, or unfair trade practice statutes – particularly where companies allegedly overstated safety testing or failed to disclose known risks.

Legal Theories and Challenges

The key doctrines underpinning this new wave of litigation include:

  • Product liability: Claims that AI or social media products were defectively designed, or that companies failed to warn users about foreseeable risks.
  • Negligence: Allegations that companies breached their duty of care by failing to implement adequate safeguards or mental health protections.
  • Fraudulent or negligent misrepresentation: Assertions that platform operators made misleading safety claims or concealed harmful effects.
  • Consumer protection: Statutory claims arguing that deceptive or unfair business practices contributed to harm.

However, plaintiffs face substantial evidentiary challenges. Proving causation between a digital product’s design and an individual’s psychological harm remains complex, particularly where harm arises from multifactorial conditions. Defendants frequently invoke Section 230, First Amendment defences, and arguments that AI outputs fall outside conventional notions of “defect” or “duty of care.” Many cases therefore remain in their preliminary stages.

The growing wave of lawsuits against AI and social media companies signals an inflection point in tech accountability. Courts are being asked not merely to assign blame for individual tragedies but to define a duty of care that aligns innovation with the protection of human welfare. Whether through negligence, product liability, or consumer protection law, these proceedings may forge the framework for responsible digital design in the decades ahead. In doing so, they challenge the industry to demonstrate that technological progress and psychological safety can – and must – coexist.

For more information on our Legal Services at Taylor Hampton Solicitors, see HERE. If you have been affected by any of the issues discussed and would like to get in touch confidentially, please contact us by emailing [email protected] or calling +44 20 7427 5970.

Disclaimer: This article provides general guidance only and does not constitute legal advice. Civil procedure rules and case law can change. Always seek professional legal advice tailored to your specific situation before acting.
