In the ongoing debate over social media’s influence on society, a new legislative proposal aims to address the role of algorithms in amplifying harmful content. As platforms face criticism for their role in spreading misinformation and fostering extremist views, lawmakers are once again seeking solutions.
The critique centers on algorithms designed to keep users engaged by recommending content similar to what they have previously interacted with. This pattern, often called the “rabbit hole” effect, has been linked to the spread of conspiracy theories and to a range of mental health harms.
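To make that feedback loop concrete, the sketch below models a hypothetical engagement-driven recommender: items are ranked by similarity to the user’s last interaction, and a user who tends to click the slightly more intense recommendation drifts toward the extreme end of the catalog. The catalog, similarity measure, and click model are all invented for illustration; real platform systems are proprietary and far more complex.

```python
# Hypothetical sketch of an engagement-driven "rabbit hole" feedback loop.
# The catalog, similarity measure, and click model are invented for
# illustration and do not reflect any real platform's system.

# Each item has a "topic intensity" from 0 (mainstream) to 1 (extreme).
CATALOG = [{"id": i, "intensity": i / 99} for i in range(100)]

def recommend(history, k=5):
    """Rank catalog items by similarity to the user's last interaction."""
    last = history[-1]["intensity"]
    ranked = sorted(CATALOG, key=lambda item: abs(item["intensity"] - last))
    return ranked[:k]

def simulate(steps=10):
    """Model a user who tends to click the most intense of the similar
    recommendations, drifting toward the extreme end of the catalog."""
    history = [CATALOG[10]]  # start with fairly mainstream content
    for _ in range(steps):
        recs = recommend(history)
        choice = max(recs, key=lambda item: item["intensity"])
        history.append(choice)
    return [round(item["intensity"], 2) for item in history]

if __name__ == "__main__":
    # Intensities climb step by step even though each recommendation
    # is only "similar" to the last item the user engaged with.
    print(simulate())
```

Even in this toy model, no single recommendation looks extreme on its own; the drift emerges only from the repeated interaction between similarity-based ranking and engagement-biased clicking, which is the dynamic the legislation targets.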
Federal protections currently shield social media companies from legal action regarding user-generated content. These protections, coupled with First Amendment rights, present challenges in regulating online speech through legislative means.
Sens. Mark Kelly and John Curtis are leading a bipartisan effort behind the Algorithm Accountability Act, a bill that would strip legal immunity from social media platforms whose algorithms promote harmful content.
The bill seeks to amend Section 230 of the Communications Decency Act, a pivotal law that has long shielded tech companies from liabilities related to user content. Kelly emphasized the need for accountability, stating, “Too many families have been hurt by social media algorithms designed with one goal: make money by getting people hooked.”
On the House side, Reps. Mike Kennedy and April McClain Delaney introduced a similar bill, signaling a coordinated legislative push.
Unlike broader reform attempts, this proposal specifically targets the algorithms behind content recommendations, aiming to ensure platforms exercise “reasonable care” in how they present content to users.
Yphtach Lelkes, a communications and political science professor, noted, “Algorithms make us see the world as more aggressive and more conflictual than it actually is.” He acknowledged the good intentions behind the bill but cautioned about the challenge of balancing public good with the platforms’ business models.
Concerns extend to how legally mandated changes to algorithms might affect content neutrality. Zach Lilly of the tech trade group NetChoice cautioned against government influence over content curation: “That’s where you start to ask those First Amendment questions.”
Moreover, the economic burden of compliance could disproportionately affect smaller tech companies, Lilly added.
Both Curtis and Kelly have personal motivations for their advocacy. Curtis has pointed to recent political violence in his home state of Utah, while Kelly has been an outspoken voice on the issue since his wife, former Rep. Gabby Giffords, was shot in 2011.
Lelkes further explained that while algorithms contribute to users falling into extreme content rabbit holes, they aren’t the sole cause. Users may already have predispositions before encountering such content.
The growing sophistication of AI in optimizing content delivery has raised additional concerns. Caitriona Fitzgerald from the Electronic Privacy Information Center underscored the lack of transparency and accountability in AI-driven decisions, stating, “We are playing catch-up.”
This article was initially published by Cronkite News and is republished here under a Creative Commons Attribution-NoDerivatives 4.0 International License.