Lily's Death Was Not an Accident

A nine-year-old girl in Texas is dead. Her name was Lily, and her parents have made it public along with their grief and their fury, because they believe, correctly, that silence is what the platforms count on. Their daughter found the blackout challenge on social media: a trend that instructs children to cut off blood flow to the brain until they lose consciousness. She tried it. She died.

The blackout challenge is not new. It has appeared on TikTok and other platforms repeatedly since at least 2021. A 2022 lawsuit documented that at least fifteen children died from the challenge after TikTok's algorithm served it to them. The platform was aware of the challenge. The algorithm promoted it anyway — because engagement is engagement, and the recommendation engine doesn't distinguish between a teenager watching a cooking tutorial and a child watching instructions for self-asphyxiation.

Lily's parents are tearing into the algorithms in public. They're right to. But tearing into them isn't enough.

Section 230 Is Not a Sacred Text

The platforms hide behind Section 230 of the Communications Decency Act, which provides broad immunity from liability for third-party content. The original logic — that you shouldn't hold a platform liable for what users post, since the platform can't review everything in real time — made sense in 1996, when the internet was a collection of message boards and the idea of an algorithmic recommendation engine that actively promotes content to specific users didn't exist.

That logic does not apply to what TikTok, YouTube, and Instagram actually do. These platforms don't passively host content. They actively curate it, rank it, and deliver it to specific users based on behavioral profiles built from hundreds of millions of data points. When TikTok's algorithm surfaces a blackout challenge video to a nine-year-old — not because she searched for it, but because the system calculated it would generate engagement — the platform is not a neutral conduit. It is a publisher making an editorial decision. And publishers bear responsibility for the decisions they make.

I've been a civil liberties advocate for years, and I approach regulatory questions with genuine skepticism about government power. Section 230 reform is the one area where I have consistently argued that the tech industry's libertarian framing of the issue is self-serving rather than principled. The free speech argument for platform immunity doesn't apply to algorithmic promotion. It applies to hosting. These are different things, and conflating them is how children die while platforms collect ad revenue.

What Reform Actually Looks Like

The parents of children killed by social media challenges are not asking for censorship. They are asking for a legal framework that holds platforms accountable for the consequences of their own algorithmic decisions — specifically when those algorithms target minors with content that has documented lethality.

That's a narrow, defensible, and constitutional ask. A platform should face civil liability when its recommendation engine serves a challenge with documented lethality to a user it knows or should know is a minor. That standard doesn't require the government to review content. It requires the platform to bear the cost of the harms its systems produce. Right now, that cost falls entirely on families like Lily's.

Congress has held approximately forty hearings on social media's impact on children since 2021. It has passed roughly zero meaningful bills. The platforms spend heavily on lobbying, and the lobbying works. Meanwhile, the recommendation engine runs, the challenges spread, and the body count accumulates.

Lily's parents said "too many kids lost." They're right. The question is how many more have to die before the law catches up with the technology that's killing them. Based on Washington's recent track record, the answer is: more than it should be.