Tech critics’ least favorite law is under siege again, this time with a focus on recommendation algorithms.
On Wednesday, Sens. John Curtis (R-UT) and Mark Kelly (D-AZ) introduced the Algorithm Accountability Act, which would amend Section 230 of the Communications Decency Act to make platforms responsible for preventing their recommendation systems from causing certain foreseeable harms. Section 230 is the law that shields online platforms — including social media sites, digital forums, blogs with comment sections, and their users — from being held liable for other people’s unlawful posts, or for engaging in good faith content moderation. But the Algorithm Accountability Act would require commercial social media platforms to “exercise reasonable care in the design, training, testing, deployment, operation, and maintenance of a recommendation-based algorithm” to “prevent bodily injury or death.” If a platform reasonably should have been able to predict that its content recommendations would result in physical harm, Section 230 would no longer protect it for surfacing those recommendations.
This approach, known as a duty of care, is similar to the one in the Kids Online Safety Act (KOSA), an embattled bill with wide support in the Senate that has stalled in the House amid tech lobbying and speech concerns. Under the Algorithm Accountability Act, victims who suffered bodily harm, or their representatives, would be able to sue tech platforms for damages if they believe the platforms violated that duty of care. But the bill applies to only a subset of web services: for-profit social media platforms with more than a million registered users.
The bill’s sponsors insist it would not infringe on First Amendment rights, getting ahead of a common critique of Section 230 reforms. Like KOSA, the new bill says it would not prevent platforms from serving users information they search for directly. It also wouldn’t restrict feeds sorted in chronological or reverse-chronological order, and it would bar enforcement of the law based on users’ viewpoints.
Curtis has blamed Section 230 for enabling the toxic social media environment he believes contributed to the September slaying of conservative activist Charlie Kirk by a gunman in Curtis’ home state of Utah. In a recent Wall Street Journal op-ed, he wrote that “online platforms likely played a major role in radicalizing Kirk’s alleged killer,” a phenomenon “driven not by ideology alone but also by algorithms—code written to keep us engaged and enraged.” At a CNN town hall at the university where Kirk was killed, Curtis and Kelly, whose wife Gabby Giffords survived an assassination attempt, previewed their new bill alongside a message calling for “tempering of political tensions from both sides of the aisle.”
Recommendation algorithms were a core issue in a major lawsuit against YouTube, Meta, and other platforms earlier this year, when a gun safety group alleged that the companies bore responsibility for radicalizing a racist mass shooter by surfacing hate speech in their recommendations. Hate speech is legally protected, and a court threw out the case, citing both Section 230 and First Amendment concerns. But the new bill could shift the balance of power in a whole range of suits against tech companies, covering everything from drug use to self-harm. Even in cases where speech is ultimately found to be legal, losing Section 230 protection could tangle platforms in lengthy legal proceedings over their hosting or moderation of user posts.
But groups that have opposed KOSA and prior attempts to reform Section 230, like the Electronic Frontier Foundation (EFF), have warned that even with such assurances, platforms would be incentivized to simply remove or decline to surface information that might be construed as a violation, potentially sweeping in resources meant to prevent the very harms lawmakers want to mitigate.