Social Media Trial Raises Key Product Safety Questions

By Gary Angiuli (March 3, 2026)

Read Article in Law.com

For years, lawsuits accusing social media companies of harming children struggled to gain traction in court.

Plaintiffs encountered familiar obstacles: proving causation in complex mental health cases, overcoming constitutional defenses rooted in free-speech doctrine and persuading judges that digital platforms could be evaluated using traditional product liability principles.

A trial now underway in Los Angeles County Superior Court in the litigation titled Social Media Cases is different, because those barriers have given way to a full evidentiary presentation before a jury. For the first time, ordinary citizens are being asked to consider whether the design of widely used social media platforms foreseeably endangered young users, and whether those design choices crossed legal lines.

The case centers on Meta Platforms Inc.’s Instagram and Facebook and Google LLC’s YouTube. TikTok Inc. and Snap Inc. were also named, but reached confidential settlements shortly before trial, leaving the remaining defendants to face what is expected to be a weekslong courtroom examination.

More than a thousand related cases filed by families, school districts and state attorneys general are moving through courts around the country. As a result, the arguments explored in Los Angeles are likely to resonate well beyond California.

The plaintiffs allege that features such as infinite scrolling feeds, autoplay video, algorithmic recommendations and constant notifications were engineered to maximize engagement among adolescents, even as concerns mounted about potential associations between heavy use and depression, eating disorders, self-harm and suicidal ideation.

From a tort law standpoint, that theory is not entirely new, even if the technology is. It mirrors the structure of classic design defect and failure-to-warn claims, where manufacturers are accused of recognizing heightened risks, selecting designs that intensified those risks and declining to adopt safer alternatives.

The plaintiffs are expected to focus on whether engagement-driven features were the predictable outcome of internal testing, data analysis and business decisions aimed at increasing time spent on the platforms. Internal research documents, communications and testimony from current and former executives may play a central role in that presentation.

The companies, in turn, have outlined a layered defense reflecting how legally complex these cases have become. They contend that there is no universally recognized medical diagnosis of social media addiction, and that existing scientific literature does not establish a direct causal relationship between platform use and specific mental health disorders.

They further assert that they have implemented numerous safeguards for minors, including parental-control dashboards, time-management tools, restrictions on teen messaging and content moderation systems.

They also invoke the First Amendment, maintaining that decisions about what content appears in users’ feeds involve editorial judgment protected by constitutional principles. Courts, including the U.S. Supreme Court, have acknowledged that platform moderation and ranking decisions can implicate free-speech concerns, and that framework will likely inform much of the legal analysis.

At its core, the case raises a difficult question: Can engagement-maximizing design be treated as actionable conduct, or is it inseparable from protected expression? That distinction may shape the trajectory of this litigation.

One view characterizes algorithms, notifications and endless feeds as functional design mechanisms that influence user behavior in measurable ways, similar to physical features of a consumer product. Another perspective sees those same systems as inherently tied to content curation, and therefore entitled to constitutional protection.

How courts and juries navigate that boundary could have implications extending beyond the social media industry.

Foreseeability is also central to the dispute. The plaintiffs point to internal research suggesting heightened risks for adolescent users, and question whether companies adequately adjusted their design choices in response. The defense has indicated that such research must be viewed in context and alongside subsequent safety initiatives.

For practitioners, this is a familiar battleground. Evidence suggesting awareness of potential risks, coupled with questions about corporate response, often becomes a focal point in product liability litigation.

Causation may ultimately be the most complex element. Adolescence is shaped by an intricate web of influences, including family dynamics, school environments, peer relationships and biological factors. The defense emphasizes that mental health challenges cannot reasonably be attributed to any single application.

The plaintiffs respond that tort law does not require a product to be the sole cause of harm, only a substantial contributing factor, and that expert testimony may assist jurors in evaluating how prolonged exposure to certain digital systems interacts with developmental vulnerabilities. Competing expert opinions are likely to play a significant role.

The remedies sought underscore what is at stake. Beyond monetary damages, the plaintiffs seek injunctive relief that could require design modifications, particularly for minors.

For corporate defendants, mandated structural changes may carry consequences far beyond any damages award, as they could affect engagement models and revenue structures. If courts begin imposing design-related requirements, the effects could ripple across the broader technology sector.

For families following these proceedings, much of the legal debate may seem technical. Yet the outcome could influence age verification systems, default time limits, recommendation practices for teen users and the visibility of parental control tools.

Even a defense verdict would not end scrutiny, given the number of pending cases and the involvement of state attorneys general.

More broadly, the Los Angeles trial reflects a legal system confronting the architecture of digital environments, rather than isolated pieces of content. Courts are being asked to examine how platform structures operate, and whether those structures create unreasonable risks for young users.

This trial will not resolve every debate about technology and youth. It may not settle the broader cultural conversation at all. But it does represent an important step in clarifying whether engagement-driven digital design is largely beyond the reach of tort law, or subject to the same scrutiny applied to other products affecting health and safety.

For parents, policymakers and technology companies alike, that makes the moment significant. The legal principles tested here could shape how children experience the online world in the years ahead.


Gary C. Angiuli is a founder and the managing partner at Angiuli & Gentile LLP, as well as the supervising partner of the firm’s business, franchise and real estate practice group.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.