Meta CEO Mark Zuckerberg exiting Los Angeles Superior Court in California
Kyle Grillot/Bloomberg via Getty Images
I sat down to write and decided to check my calendar on my phone first. A friend had sent me a meme on Instagram, so I opened it. Below the post, a series of algorithmically curated short videos appeared, on topics from ravens in the Tower of London to Indonesian street food. I watched one, then another, and soon found myself scrolling endlessly. As the content turned more unsettling and political, I realized 45 minutes had slipped away unnoticed.
My day wasn’t ruined, but I felt drained and disheartened. How had Instagram captivated me for so long, serving up hundreds of videos and ads, when I had only intended to check my calendar? And why did it leave me feeling so down?
These questions are central to two California court cases pitting thousands of individuals and organizations against major social media firms, including Meta (which owns Facebook and Instagram), Google (which owns YouTube), Snap (which owns Snapchat), ByteDance (which owns TikTok) and Discord. The plaintiffs, among them school districts and concerned parents, claim these platforms endanger children, inflicting severe psychological harm and, in some cases, contributing to deaths. Young people, they allege, are lured into harmful content: violence, unattainable beauty standards, dangerous stunt challenges. At the core of the cases is whether the companies can be held accountable for the harm their platforms cause.
For over a decade, many US lawmakers have in effect answered no. Rather than regulating the companies themselves, several US states have passed laws shaping how children engage with social apps: some require parental consent before minors can create accounts, while others try to curb bullying by removing “like” counts on posts. These laws mainly target the risks posed by content on social media, leaving the companies off the hook thanks to Section 230 of the Communications Decency Act, which shields platforms from liability for content their users post.
Section 230 seemed sensible in the 1990s, before anyone worried about doomscrolling, algorithmic manipulation or influencers promoting extreme measures like reshaping one’s jawline with a hammer. And its practicality is hard to deny: a platform like YouTube, which hosts 20 million new videos daily, could not operate if it were held liable for every unlawful post.
Against the backdrop of these legislative efforts sits the US’s strong commitment to free speech, which makes it easy for companies like Meta and Google to challenge laws that might limit access to online speech, even speech about controversial subjects such as extreme dieting. Many laws aimed at restricting minors’ use of social media have been struck down by judges who deemed them infringements on free expression. Time and again, social media companies have wielded free speech protections to fend off regulation.
The two ongoing California cases, however, take a different tack, sidestepping debates over content and free speech entirely. They contend that the very design of the platforms is “defective” and harmful: endless scrolling, constant notifications, auto-playing videos and seductive recommendation algorithms, all deliberately engineered by the companies. These design choices, the lawsuits argue, make social media apps “addictive”, akin to “slot machines” that exploit young people with AI-driven endless feeds built to keep them engaged. The aim is to hold the companies accountable for the damage their products do to vulnerable users.
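To see why the plaintiffs single out design rather than content, it helps to sketch what an engagement-driven feed loop might look like in code. The following is a minimal, entirely hypothetical illustration in Python; the Post class, the predicted_watch_time signal and the scoring noise are my own illustrative assumptions, not anything drawn from the companies’ actual systems.

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_watch_time: float  # a model's guess at how long this user will watch

def next_batch(candidates: list[Post], batch_size: int = 5) -> list[Post]:
    """Rank candidate posts by predicted engagement, with a little randomness.

    The noise makes each scroll's reward unpredictable -- the intermittent,
    'slot machine'-style reinforcement the lawsuits describe.
    """
    scored = sorted(
        candidates,
        key=lambda p: p.predicted_watch_time * random.uniform(0.8, 1.2),
        reverse=True,
    )
    return scored[:batch_size]

def endless_feed(candidates: list[Post]):
    """An infinite generator: there is no last page, so no natural stopping point."""
    while True:
        yield from next_batch(candidates)

# The feed never ends on its own; only the user can close the session.
feed = endless_feed([Post("ravens", 40.0), Post("street food", 55.0), Post("politics", 70.0)])
for _, post in zip(range(8), feed):
    print(post.topic)
```

The point of the sketch is that nothing here depends on what any individual post says: the loop that ranks for engagement and never terminates is a design decision, which is exactly the territory the lawsuits target.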
This argument mirrors the one made in the lawsuits US states brought against tobacco companies in the 1990s, which successfully argued that the firms knew their products were harmful but concealed the fact. As a result, tobacco companies settled with victims, added warning labels and changed their marketing to stop appealing to children.
Recent leaks from Meta suggest the company was aware of its product’s addictive nature. A federal judge unsealed court documents in a case brought on behalf of a teenager who became suicidal after developing a social media addiction. Among them were internal Instagram communications in which a user experience specialist reportedly described Instagram as a “drug” and likened the company to “pushers”. These are part of a larger trove of documents from Instagram and YouTube that lawyers say shows the companies knowingly built defective products.
The outcome of these trials could reshape the landscape of social media. It remains to be seen whether US law will finally recognize what many have long suspected: the problem lies not in the content, but in how the companies deliver it.
Need a listening ear? UK Samaritans: 116 123 (samaritans.org); US Suicide & Crisis Lifeline: 988 (988lifeline.org). Visit bit.ly/SuicideHelplines for services in other countries.