Important Legal Decisions Against Meta and Google
By Fanny Gauthier, Associate Lawyer, Milestone Avocats (*)
In late March 2026, two American juries delivered historic verdicts against Meta and Google, recognizing that digital platforms can be held liable not only for the content they host, but also for the harmful design of their services. This legal shift is expected to fuel the debate in Europe and France.
Social network giants faced a particularly challenging month in March: two unfavorable jury verdicts in two days. On March 24, in New Mexico, a jury ordered Meta to pay $375 million for misleading the public about the safety of minors on its platforms (Facebook, Instagram, and WhatsApp), in violation of the state’s consumer protection law. The case had been initiated in late 2023 by Attorney General Raúl Torrez, who accused Meta of exposing minors to predators and inappropriate content.
The following day, a Los Angeles jury became the first to hold social media platforms liable for harm to a plaintiff’s mental health caused by the addictive design of their services. The jury found that Meta and YouTube, a Google subsidiary, had been negligent and had failed in their duty to warn users of the addiction risks their platforms posed. Meta was ordered to pay $4.2 million in compensatory and punitive damages, and YouTube $1.8 million. Both companies announced their intention to appeal. The Kaley G.M. case had been selected as a bellwether to guide the resolution of thousands of similar complaints. TikTok and Snapchat, also named in the suit, had reached a confidential settlement before trial. The verdict could have significant implications: if Meta and YouTube’s appeals fail, both companies could face a wave of similar judgments, exposing them to liability toward millions of users.
The determining factor in these decisions? The platforms were penalized not for user-generated content, but for the fundamental design of their systems – recommendation algorithms built to maximize screen time, endless scrolling, autoplay, late-night notifications, and body-image-altering filters. A major paradigm shift.
Bypassing the Section 230 Shield
In the United States, platforms benefit from a powerful legal shield: Section 230 of the Communications Decency Act of 1996, which in principle protects them from liability for third-party content. These actions circumvented that obstacle by targeting not the content, but the design of the platforms themselves. Platforms are thus treated not merely as content hosts, but as manufacturers of products with inherent design defects.
Platforms as Addictive as Tobacco?
These cases are reminiscent of the tobacco litigation that culminated in the $206 billion Master Settlement Agreement of 1998, after internal documents revealed that cigarette manufacturers had long known about the addictive effects of nicotine and deliberately concealed them. The parallel is striking: evidence presented at the California trial showed that Meta had known for years about Instagram’s negative effects on the mental health of teenage girls, yet did not change its design choices. The comparison has its limits, however: social media addiction involves complex psychosocial mechanisms, and the platforms also offer real benefits in terms of information and social connection.
Outlook in Europe and France
Europe has a more protective regulatory framework for users. The Digital Services Act (DSA) prohibits dark patterns – interfaces designed to manipulate or deceive users – and requires very large platforms to conduct an annual assessment of the systemic risks associated with their services. In 2024, the European Commission opened formal proceedings against Meta and TikTok on these issues. In cases of non-compliance, fines can reach up to 6% of global annual revenue. In France, the 2024 SREN law and a bill to ban social media access for children under 15 demonstrate real political will to address these issues.
In civil litigation, platforms enjoy the protection of hosting status, except where they have knowledge of manifestly illegal content and fail to remove it promptly. The new EU directive on liability for defective products, due to be transposed by the end of 2026, could open an additional avenue.
The movement is already underway in France. On November 4, 2024, seven French families, united in the Algos Victima collective, sued TikTok before the Créteil judicial court, accusing it of moderation failures and of the addictive nature of its algorithm, which had gradually steered young users toward content glorifying self-harm and suicide. In November 2025, the Paris prosecutor’s office opened a preliminary investigation into incitement to suicide and unlawful data collection. On March 26, 2026, following the California verdict, the Ministry of National Education filed a report with the prosecutor after testing TikTok’s algorithm with a fictitious minor’s account, which had gradually been directed toward illicit content.
The protections attached to hosting status appear to be wavering on both sides of the Atlantic. When a company designs a product to maximize user engagement, it can no longer ignore the risks this choice poses to users’ health without exposing itself to liability.