Democrats and Republicans on Capitol Hill agree on at least one issue these days: The decades-old law that shields large social media companies like Facebook and Twitter from lawsuits over the content their users post on their platforms must be changed.
On Tuesday, Facebook's Mark Zuckerberg and Twitter's Jack Dorsey appeared before the Senate Judiciary Committee to discuss potential legislation that would limit protections for social media companies under Section 230 of the 1996 Communications Decency Act, which provides a shield to online publishers from liability for content generated by users. Several proposals have already been introduced.
Republicans railed against the companies and their CEOs, who appeared virtually, over the perception that the Silicon Valley powerhouses are biased against conservative views and work to censor conservatives, like President Donald Trump, while giving liberal politicians a pass.
"We have to find a way when Twitter and Facebook make a decision about what's reliable and what's not, what to keep up and what to keep down, that there is transparency in the system," said Sen. Lindsey Graham, a South Carolina Republican who chairs the Judiciary Committee. "Section 230 has to be changed because we can't get there from here without change."
Democrats agree that reforms are needed, but they see the problem differently, arguing that the Section 230 shield lets social media companies off the hook for failing to do more to moderate their platforms and take down or limit content, such as hate speech and disinformation about COVID-19 and the elections.
"Change is going to come," said Sen. Richard Blumenthal, a Democrat from Connecticut. "No question. And I plan to bring aggressive reform to 230."
But he rejected the notion that reforms to the law should be politicized.
"I am not, and nor should we be in this committee, interested in being a member of the speech police," he said.
The contentious hearing, which lasted more than four hours, was heavy on attacks and demands for explanations of specific incidents but thin on proposed solutions. Zuckerberg and Dorsey pledged to support reforms requiring more transparency. But they balked at deeper changes that would make them more responsible for the content posted on their sites.
"I believe that we can build upon Section 230," Dorsey said. "I think we can make sure that we're earning people's trust by encouraging more transparency around content moderation."
But he also cautioned lawmakers not to go too far in their reforms. He said that without the law's protections Twitter would never have gotten off the ground 14 years ago, stating that the law's protections had created "so much goodness and innovation."
"What we're most concerned with is making sure that we continue to enable new companies to contribute to the internet and to contribute to conversation," he said.
Zuckerberg admitted that social media platforms "have responsibilities, and it may make sense for there to be liability for some of the content that is on the platform." But he also said that social media platforms are not news publishers and therefore they still require some protections under the law.
"I think it [social media] deserves and needs its own regulatory framework," he said.
All of this comes as Trump, two weeks after the election, still refuses to admit publicly that he has lost to former Vice President Joe Biden. Social media posts from the sitting president have falsely claimed the election was stolen. And Trump has continued to tweet and retweet items that contain disputed information, prompting Twitter to slap warning labels on those posts. Additionally, baseless claims of election fraud from a variety of sources have also appeared on Twitter, as well as YouTube and Facebook.
Previously, the Trump administration threatened regulation that would make social media companies responsible for labeling or taking down false information. In October, President Trump tweeted "REPEAL SECTION 230!!!" after Facebook and Twitter slowed the spread of a New York Post story that contained unverified claims concerning Biden's son, Hunter Biden.
The Republican-led Federal Communications Commission is writing new regulations for Section 230 that would penalize companies for censoring content. The agency's top lawyer explained in a blog post why he thinks the FCC has the legal authority to reinterpret the law.
Tech companies say Section 230 protections, which shield them from liability for their users' posts and also let them moderate harmful content without facing repercussions, allowed online platforms to flourish in the early days of the internet.
As the influence and size of companies like Google, Twitter and Facebook have grown, lawmakers have questioned whether more regulation is needed to rein in their power. Democrats are troubled by the rampant flow of hate speech and disinformation, including interference by foreign countries in the 2020 US presidential election. Republicans, led by Trump, allege their speech is being censored by social media sites. There's no evidence the allegation is true, and the companies strongly deny the claim.
Here's what you need to know about the government's potential role in regulating social media:
What is Section 230?
Section 230 is a provision of the 1996 Communications Decency Act. A number of tech industry observers say it's the most important law protecting free expression online.
The provision essentially protects companies that host user-created content from lawsuits over posts on their services. The law shields not only internet service providers, like AT&T, Comcast and Verizon, but also online platforms, like Facebook, Twitter and Google.
Section 230 isn't blanket protection. There are exceptions for federal crimes or intellectual property claims. A company could still be held accountable if it knowingly allowed users to post illegal content.
The law provides social media companies with sweeping protections that let them choose what content they restrict, and how. This means social media platforms can't be sued for taking down content or leaving it up.
Why did lawmakers think this was a good idea?
By eliminating liability risk, Section 230 has allowed companies to experiment. Without it, Twitter and Facebook almost assuredly wouldn't exist, at least not as they do now. And it isn't just big companies that gain from the law. Nonprofits have benefited too.
"Without Section 230, we'd have no Wikipedia," said Ernesto Falcon, senior legislative counsel for the Electronic Frontier Foundation, referring to the volunteer-maintained online encyclopedia.
Many experts say the law has enabled the internet to develop into a medium that allows ideas and political discourse to flow freely. Section 230 allowed online communities to experiment with content moderation, Falcon said. Without these protections, companies might not bother with moderation at all, he said, which would likely lead to even more offensive, false or misleading content online.
OK. So what are the problems with Section 230?
Most of the problems around Section 230 involve which posts social networks allow to stand and which ones they remove. The rancor around those decisions has prompted some politicians to call for the provision to be repealed or altered.
Democrats are most concerned about getting big social media companies to take down hate speech, harassment, disinformation and terrorism-related content. Republicans allege social media companies censor conservative viewpoints.
Biden, then the Democratic presidential nominee, argued in January that social media companies don't deserve protection because they knowingly allow false information on their platforms.
In an interview with The New York Times editorial board, Biden called for Section 230 to be "immediately" revoked. "It is propagating falsehoods they know to be false," Biden said, "and we should be setting standards not unlike the Europeans are doing relative to privacy." (Biden was referring to the EU's General Data Protection Regulation, a sweeping privacy law.)
Meanwhile Republicans, like Sens. Josh Hawley of Missouri and Ted Cruz of Texas, as well as Rep. Paul Gosar of Arizona, have called for changes to the law. They allege that social media companies have been working to silence conservative voices. There's no evidence the allegation is true, and the companies deny it.
Didn't the Justice Department propose some changes to the law for Congress to look at?
Yes. The Justice Department offered draft legislation in September after reviewing the statute for a year. The department had put forward recommendations in June.
The draft focuses on two areas. The first includes a series of reforms to "promote transparency and open discourse and ensure that platforms are fairer to the public when removing lawful speech from their services." The DOJ contends the current implementation of Section 230 enables online platforms "to hide behind the immunity to censor lawful speech in bad faith."
The Justice Department proposes clarifying language in Section 230 and replacing vague terms to better guide platforms, users and the courts.
The draft also aims to incentivize social media platforms to crack down on illicit content online. The Justice Department said "platforms that purposely solicit and facilitate harmful criminal activity ... should not receive the benefit of this immunity. Nor should a platform receive blanket immunity for continuing to host known criminal content on its services, despite repeated pleas from victims to take action."
It also provides more clarity on civil enforcement for Section 230.
Didn't Trump issue an executive order about Section 230?
In May, Trump issued an executive order directing the FCC to establish regulations clarifying the parameters of the good-faith effort Section 230 requires online companies to make when deciding whether to delete or modify content. At the heart of Trump's executive order is the claim that social media sites censor conservative viewpoints they disagree with.
Section 230 protects social media platforms from liability for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." This would include deleting posts or putting a label on a post noting that it may be false, even if the post would be protected by the First Amendment against government censorship.
Does the FCC have any authority to make rules limiting Section 230?
That's the big question. The FCC's top lawyer says it does. But Democrats and watchdog groups, such as Public Knowledge, say the FCC does not have the authority to impose these regulations. Critics argue the law contains no language giving the FCC or any other federal agency explicit authority to make rules limiting what an online company can do; it only addresses who can be sued and on what grounds.
But the FCC argues that the agency's authority to regulate Section 230 comes from the Communications Act.
Most experts say the FCC would likely be challenged in court if it imposed any rules around Section 230, and the courts would then decide whether the agency had overstepped its authority.
Still, one thing is clear: Any role in policing social media would be awkward for the FCC, which has cast itself as anti-regulation under Ajit Pai, its Trump-appointed chairman.
Can the president direct the FCC to take action or make new rules?
No. The FCC is an independent federal agency. Even though commissioners at the agency are appointed by the president, the FCC doesn't take directives from the executive branch. Instead, it gets its authority from Congress. That means the only way the FCC would be able to make rules limiting or clarifying Section 230 would be for Congress to pass a law giving it that authority.
The president's executive order takes this into consideration. It's worded carefully to direct the Commerce Department to ask the FCC to consider a petition asking it to make new rules.
Doesn't the FCC have authority to make sure that content on TV or radio is fair and balanced? Why can't it do that for the online world?
Actually, the FCC hasn't had a so-called Fairness Doctrine, which required broadcast license holders to present opposing perspectives on controversial or political issues, since 1987. But even if it did have such a policy for TV and radio, the agency wouldn't be able to apply the same rules to social media companies, because it has no authority to regulate those companies.
In fact, the current FCC, under the Trump administration, explicitly cited Section 230, which states Congress' intent to keep the internet unregulated, as an argument for repealing the Obama-era net neutrality rules that imposed regulations on broadband providers.
It's contradictory for Pai and the other Republicans on the FCC to argue that the agency should regulate social media companies, when they stripped the agency of its authority to regulate broadband companies like Comcast or Verizon, says Gigi Sohn, a distinguished fellow at the Georgetown Law Institute for Technology Law & Policy.