
Supreme Court Hears Google, Twitter Cases That Could Shake Up the Internet

The highest court in the US heard oral arguments on Tuesday and Wednesday in two cases targeting Big Tech that could reshape online speech.

Queenie Wong, Former Senior Writer
[Image: Logos for Facebook, Twitter and Google on a smartphone in front of a gavel, scales of justice and a book.]

The US Supreme Court is weighing two cases about online speech. (Emin Sansar/Anadolu Agency/Getty Images)

The future of online speech and the internet is in the hands of the US Supreme Court. 

On Tuesday and Wednesday, the Supreme Court heard oral arguments in two high-profile cases involving Google-owned YouTube, Twitter and Facebook that could reshape how people use the internet and what they can post online. Both cases stem from lawsuits brought by relatives of people killed in separate terrorist attacks, alleging that the social media companies are liable for the harmful content that appears on their platforms. 

At stake are questions about whether these online platforms should be held legally responsible for content created by their users but promoted by the companies' algorithms. Tech companies have successfully fought back against these types of lawsuits because of protections they receive under a 27-year-old federal law. 

But lawmakers on both sides of the aisle, including US President Joe Biden, have called for changes to what's known as Section 230 because of growing concerns that tech companies aren't doing enough to safeguard user safety. Tech companies say that removing this legal shield could hurt free expression because they could be subject to more lawsuits.

Eric Goldman, a professor at Santa Clara University School of Law, said tech platforms give people the ability to talk to others online. That could go away depending on what the Supreme Court decides. 

"If the Supreme Court says that's a risky option, then the Supreme Court isn't sticking it to Big Tech," said Goldman, who wrote a brief supporting Section 230 protections. "It's sticking it to all of us." Companies could limit who can post on their platforms or scrap user-generated content, he added.

Here's what you need to know about this high-stakes battle over online speech:

What is Section 230?

Section 230 is part of the 1996 Communications Decency Act, which shields platforms, including Google, Twitter and Meta-owned Facebook, from certain lawsuits over posts created by users. It also allows these platforms to take action against offensive content.

The provision states that no provider or user of an "interactive computer service" shall be treated as the publisher or speaker of third-party content.

The co-authors of Section 230 -- US Sen. Ron Wyden, an Oregon Democrat, and former Rep. Chris Cox, a California Republican -- told the Supreme Court in a brief that Congress created it "to protect Internet platforms' ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content." Even back then, online services were facing lawsuits over user content. In 1995, for example, the New York Supreme Court ruled that Internet message-board platform Prodigy Services could be liable for publishing alleged defamatory content.

Section 230 doesn't apply to content that violates criminal, intellectual property, state, communications privacy and sex trafficking laws.

Why should I care?

Section 230 was designed to encourage free speech online. But a Supreme Court ruling on the matter could alter how you use the internet and what you can post online. If an online platform is worried about more lawsuits, it could change how it moderates content and potentially increase the scrutiny over what you say.

"Without Section 230's protections, many online intermediaries would intensively filter and censor user speech, while others may simply not host user content at all," the Electronic Frontier Foundation said in a blog post about the topic. 

What cases is the Supreme Court hearing?

The Supreme Court is examining two cases involving online speech: Gonzalez v. Google and Twitter v. Taamneh. 

Gonzalez v. Google, which was heard on Tuesday, centers on whether Section 230 protects online platforms including social networks from lawsuits when they recommend third-party content. The case stems from a lawsuit filed by the family of Nohemi Gonzalez, a 23-year-old American student who was killed in 2015 in terrorist attacks in Paris. The family alleged that Google-owned YouTube aided the ISIS terrorists because the video-sharing platform allowed them to post videos that incited violence and recruited supporters. The lawsuit also accuses YouTube of recommending ISIS videos to users. 

A district court and the US Court of Appeals for the Ninth Circuit ruled in Google's favor, dismissing Gonzalez's claims.

In Wednesday's Twitter v. Taamneh case, the Supreme Court examined whether people can sue online platforms for aiding and abetting an act of terrorism. The case involves the 2017 death of Nawras Alassaf, a Jordanian citizen who was fatally shot in a nightclub in Istanbul during a mass shooting. ISIS claimed responsibility for the attack. Relatives of Alassaf sued Twitter, Google and Facebook, alleging that the platforms were liable under the Anti-Terrorism Act for aiding and abetting terrorism because the companies didn't do enough to combat this harmful content. 

A district court dismissed the claims in the lawsuit, but the US Court of Appeals for the Ninth Circuit reversed the decision.

What happened during the hearings?

On Tuesday, the Supreme Court justices asked lawyers representing Google and the Gonzalez family a variety of questions for more than two and a half hours about Google's algorithm, YouTube's thumbnails, artificial intelligence and actions that users take, such as liking or sharing a post.

Justice Elena Kagan said everyone is trying their best to figure out how a "pre-algorithm statute" applies in a "post-algorithm world."

"Every time anybody looks at anything on the internet, there is an algorithm involved," she said.

Eric Schnapper, the lawyer representing the Gonzalez family, said they're trying to make a distinction in their arguments between "liability for what's in the content that's on their websites" and actions companies take to encourage users to look at certain content. 

At one point, Justice Samuel Alito told Schnapper that he was "confused" by the arguments the lawyer was making. He asked whether, if a user creates an ISIS video, YouTube could be sued as a publisher for displaying a preview image of that video, known as a thumbnail.

"It is acting as a publisher but of something that they helped to create because the thumbnail is a joint creation that involves materials from a third party and a URL from them and some other things," Schnapper replied. 

Justice Amy Coney Barrett asked if a user could be liable for retweeting or liking a tweet. 

"On your theory, I'm not protected by Section 230?" Barrett asked. "That's content you've created," Schnapper replied after the two went back and forth over the definition of a user under Section 230.

On Wednesday, the Supreme Court heard arguments about whether Twitter and other social media companies could be liable for aiding and abetting an act of terrorism.

Justice Clarence Thomas posed a question: If he had a friend who was a murderer or a burglar and he loaned him a gun without knowing what he would do with the weapon, would that be considered an act of aiding and abetting? 

"I think it wouldn't be," said Seth Waxman, Twitter's lawyer. Waxman said you would have to have a "general awareness" that you're assisting an illegal activity. If the justice helped open a gate for a neighbor who then steals his other neighbor's sheep, then he would know that he provided substantial assistance but he wouldn't be "culpable" within the meaning of aiding and abetting under the law. 

Justice Sonia Sotomayor said she thinks "the center of the issue" is whether the platforms "knowingly" provided "substantial assistance" under the Anti-Terrorism Act.

"Willful blindness is something that we have said can constitute knowledge," she said. 

How have tech companies responded?

Google's lawyer Lisa Blatt told the Supreme Court on Tuesday that holding websites liable for recommending third-party content "threatens today's internet."

"The internet would have never gotten off the ground if anybody could sue every time," she said about Section 230 protections.

In a post about the case before the hearing, Google said that users would be "left with a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content."

If the platform could get sued for content it recommends, consumers could have a tougher time finding content they want to view. The tech giant also says that removing Section 230 protections would make the internet less safe, hurt both big and small online platforms and cause websites to restrict more content or shut down some services because of the legal risks.

Other tech companies, including Reddit, Yelp, Microsoft and Meta, have also defended Section 230 protections in briefs filed to the court. 

"Exposing companies to liability for decisions to organize and filter content from among the vast array of content posted online would incentivize them to simply remove more content in ways Congress never intended," Jennifer Newstead, Meta's chief legal officer, said in a January blog post about the topic.

Reddit said in its brief that users could become more wary about volunteering to moderate content on its platform or recommending content through actions such as "upvoting" because of legal risks.

In Twitter v. Taamneh, Twitter said it didn't aid and abet an act of terrorism because the company didn't intend to help terrorists, had rules against posting terrorist content and wasn't connected to the terrorist attack in Turkey. Facebook and Google-owned YouTube backed Twitter in a brief, stating that the appeals court's ruling on the Anti-Terrorism Act is "incorrect" and could open the door to more lawsuits against any provider of goods or services that terrorists abuse, such as airlines, financial services providers and pharmaceutical companies.

Waxman told the Supreme Court on Wednesday that the tech platforms had no intent of aiding ISIS terrorism activity. "They maintained and regularly enforced policies prohibiting content that promotes terrorism activities," he said.

Twitter, which no longer has a communications department, didn't respond to a request for comment. 

What do US lawmakers think about this?

Democrats and Republicans, surprisingly, agree that reforms to Section 230 are needed. But their motivations differ sharply.

Republicans accuse Big Tech of suppressing conservative voices, something the companies have repeatedly denied. Last week, US House Judiciary Committee Chairman Jim Jordan issued subpoenas to the CEOs of Google's parent company Alphabet, Amazon, Apple, Meta and Microsoft.

Democrats argue that Section 230 prevents social media companies from being held accountable for failing to moderate hate speech, misinformation and other offensive content.

"We need Big Tech companies to take responsibility for the content they spread and the algorithms they use," Biden wrote in an op-ed published in The Wall Street Journal in January.

Justice Brett Kavanaugh asked if it would be better to keep Section 230 as it is and leave it to Congress to change the law. The Supreme Court is being asked to make a "predictive judgment" when the justices don't know how "bad" the consequences could be, he added.

"I don't know how we can assess that in any meaningful way," he said.

What happens next?

The Supreme Court is expected to rule on the cases this year. The court is also being asked to review other cases involving online speech. In January, it put off deciding whether to hear challenges to controversial laws passed in Texas and Florida that restrict how social media companies can moderate content.