WASHINGTON — Amid increased scrutiny of the actions of big tech, the U.S. Supreme Court heard oral arguments in the first of two cases on Tuesday that could alter how the law protects internet companies.
The case, Gonzalez v. Google LLC, involves the family of an American killed in an ISIS terrorist attack in Paris in 2015. The family is suing Google over its actions as the parent company of YouTube, which they say knowingly spread and recommended ISIS recruitment videos.
At the heart of the case is Section 230, a provision of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996. The provision shields internet services from liability for third-party content posted on their platforms. Supporters argue the provision allows as much content as possible to be freely uploaded to the internet, since companies cannot review all the information third parties put on their platforms.
But in recent years, Section 230 has come under fire from critics. They say it protects big tech too much, and that companies use it to avoid repercussions for harm done on their platforms or to disguise politically partisan activity.
Gonzalez is the first case the Supreme Court has heard on this topic, said Anupam Chander, a professor at Georgetown Law.
“This is unprecedented,” he said. “The circuit courts have reached the same conclusions, so there is a uniform interpretation of Section 230 as it currently stands.”
That interpretation has been very broad, Chander added. He said courts have read the statute to extend beyond the mere publication of third-party content to the selection and curation of that material as well. Lower court decisions in this case have upheld that precedent, siding with Google.
But in its brief, the Gonzalez legal team is challenging that interpretation. The lawyers argue that, read literally, Section 230 says nothing about the recommendation of third-party material, so Google is not protected.
At oral arguments, however, justices took issue with the expansive nature of both sides' positions.
Justice Elena Kagan asked Gonzalez counsel Eric Schnapper whether excluding algorithms from Section 230 would essentially make the statute useless, as algorithms run so much of the modern internet. Schnapper responded that what matters is how an algorithm is used and what harm arises from it.
The justices also focused on how to determine when an algorithm is aiding and abetting harm, something Justice Sonia Sotomayor said is necessary for defamation claims to be viable. "There has to be some intent," she said.

Schnapper struggled to articulate such a standard but reiterated his point that recommendations are not protected under Section 230.
“They’re going to give me a catalog,” he said about recommendations. “They created that content.”
Meanwhile, the justices questioned Google's counsel, Lisa Blatt, about how far the statute goes. Justice Clarence Thomas asked Blatt whether a company's endorsements of content are protected under Section 230; she responded that an endorsement is the company's own speech and therefore would likely not be protected.
Under questioning from other justices, however, she said that biased algorithms are protected under her interpretation of Section 230. The justices pushed her further, asking whether algorithms that prevent white users from seeing news about racial justice would be shielded from litigation, and she gave a more uncertain answer.
Other tech companies have come to Google's defense. Microsoft said in an amicus brief that "accepting (Gonzalez's) arguments would wreak havoc on the internet as we know it," while Meta, the parent company of Facebook, warned that altering Section 230 would lead to the removal of more content from its site.
“The floodgates of litigation will open if they are to be held liable for their failures in their recommendations services,” Chander said, explaining the tech companies’ warnings. “Their economically self-interested response will be to remove controversial content that might expose them to liability.”
Those concerns were raised during the hearing. The plaintiffs said they expected little litigation to arise if Section 230 were weakened, but Blatt gave an explicit warning.
“Congress made that choice to stop lawsuits from stifling the internet in its infancy,” she said about the creation of Section 230. “The internet would have never got off the ground.”
Any shift toward heavier-handed moderation will be an economic calculation, Chander said, as companies weigh legal costs against increased scrutiny of their content regulation decisions.
“It costs less to take down speech,” he said. “It’s very costly to leave it up.”