WASHINGTON — As Section 230 marks its 30th anniversary, legal and technology experts warned Thursday that excluding generative artificial intelligence from the law’s protections could dramatically reshape online speech.

Enacted as part of the Communications Decency Act of 1996, Section 230 of the Communications Act of 1934 provides that online platforms are not to be treated as the publisher or speaker of content created by their users. The provision shields platforms from liability for third-party content and is widely credited with enabling the growth of social media, forums and user-generated content across the internet.

With its growing popularity and adoption, AI has added complexity to the discussion. AI chatbots, for instance, are trained on large datasets and shaped by complex programming that determines what content is appropriate to produce. The process involves “a lot of human, intentional decisions,” said Jess Miers, assistant professor at the University of Akron School of Law and an expert on Section 230.

The comment came during a panel at a day-long conference hosted by the Cato Institute in Washington. The event brought together legal scholars, policy experts and industry leaders to examine how a law written in the early days of the internet is being tested by AI.

Panelists focused on whether AI-generated outputs should be considered speech under existing legal frameworks, and what that means for Section 230 liability. Miers argued that a blanket rule denying Section 230 protection to AI systems could have consequences beyond chatbots, potentially affecting long-standing online practices such as ranking, sorting and editing third-party content.

Other panelists emphasized how liability uncertainty could reshape the market. Matt Reeder, head of legal at Bluesky, pointed to decentralized platforms as examples of how liability protections can empower users. Describing Bluesky’s structure, Reeder said it is “much more like a farmers market,” where users own their identity and content and can move freely if they disagree with platform decisions.

Ashkhen Kazaryan, senior legal fellow at The Future of Free Speech, highlighted how courts have gradually narrowed the scope of Section 230 through case law, focusing on how the statute applies to modern content algorithms and moderation. Panelists also discussed Anderson v. TikTok, a case that has yet to reach the Supreme Court but raises questions about whether platforms can be held liable for harms linked to algorithmic amplification of third-party content.

“Algorithms, and the Supreme Court said this, are just tools that platforms are using to solidify their editorial discretion,” Kazaryan said.

The conference also featured a virtual conversation with Sen. Ron Wyden, D-Ore., one of Section 230’s original authors. He said the law was not originally written with generative AI in mind, but sets general guidelines for internet users.

“It establishes very simple principles for a wide variety of internet services that take us into the next generation of technologies,” Wyden said.