A 25-year-old law that shields social media companies from lawsuits over content their users post is once again under attack. This time it's because a Facebook whistleblower, who leaked thousands of internal documents about the company, testified before Congress and urged greater oversight over the company for allegedly creating products that "harm children, stoke division and weaken our democracy."
On Tuesday, Frances Haugen, a former Facebook product manager, sat before a Senate subcommittee for more than three hours and described how the social media giant has prioritized its profits over public good. She called on lawmakers to take action to "change the rules that Facebook plays by and stop the many harms it is now causing."
"We now know the truth about Facebook's destructive impact," she said. "I came forward at great personal risk, because I believe we still have time to act. We must act now. I'm asking you, our elected representatives, to act."
Haugen, who outed herself Sunday evening during an interview with the CBS news program 60 Minutes, has released tens of thousands of pages of internal documents detailing the ways Facebook hid its own research showing that its platform algorithms and business model put the health and well-being of the public, especially teens, at risk. She gave these documents to the Securities and Exchange Commission, Congress and reporters at The Wall Street Journal.
In her testimony, she called on Congress to regulate Facebook and require more transparency from the company about its practices. She also urged lawmakers to reform a key federal law, Section 230 of the Communications Decency Act, which shields internet companies from legal liability for content posted by their users. But she warned that reforms focused only on making Facebook liable for the content its users post would not be sufficient to fix the company's problems.
Instead, she suggested that lawmakers revise Section 230 to make Facebook responsible for its algorithms, which are used to rank content. Haugen thinks that doing so would push the company to abandon engagement-based ranking, which feeds users a cycle of harmful, inflammatory or untrue content.
"Modifying Section 230 around content is very complicated because user-generated content is something that companies have less control over," she said. "They [Facebook] have 100% control over their algorithms. And Facebook should not get a free pass on choices it makes to prioritize growth, virality and reactiveness, over public safety."
Haugen's testimony comes as Congress's scrutiny of the world's largest social network intensifies and as US lawmakers of both political parties look to make Facebook and other major tech platforms more accountable for harm done to users.
"Right now, [Facebook] has broad immunity," Senator Richard Blumenthal, a Democrat from Connecticut, said during the hearing Tuesday. "You can't sue Facebook. You have no recourse."
He said that Section 230 should be reformed, a common refrain from Democrats and Republicans alike on Capitol Hill, who generally agree that changes need to be made to the law. As a result, a slew of legislation aimed at reforming the liability shield is already under consideration.
Calls for reform have taken on new urgency as social media sites battle a flood of troubling content, including disinformation about the coronavirus vaccines, the outcome of the US presidential election and the deadly attack on the US Capitol. Adding to that urgency are revelations that Facebook allegedly serves up harmful and divisive content to its users to drive engagement, even though the company knows this content has been linked to users' depression, self-harm and even suicide.
Republicans have widely called for the reform or repeal of the law because of their perception that the Silicon Valley powerhouses are biased against conservative views and work to censor conservatives, like former President Donald Trump, while giving liberal politicians a pass.
Democrats agree that reforms are needed, but they see the problem differently, arguing that Section 230 prevents social media companies from doing more to moderate their platforms, such as taking down or limiting hate speech and misinformation about COVID-19.
Tech companies say Section 230 protections, which shield them from liability for their users' posts and also let them moderate harmful content without facing repercussions, allowed online platforms to flourish in the early days of the internet. And they see even well-intentioned reforms as potential threats to the internet.
"Targeting Section 230 is a messy, false solution that would undermine human rights, do more harm than good, and actually solidify Facebook's monopoly power," Fight for the Future director, Evan Greer, said in a statement. "Instead, there is a much clearer path: for Congress to pass a Federal data privacy law strong enough to effectively kill Facebook's current business model. They should do that without delay."
To help you better understand what Section 230 is and how lawmakers may revise it to rein in the power of social media giants, like Facebook, we've put together this FAQ.
What is Section 230?
Section 230 is a provision of the 1996 Communications Decency Act that protects companies that host user-created content from lawsuits over posts on their services. The law shields both internet service providers, like AT&T, Comcast and Verizon, and social media and internet platforms, like Facebook, Twitter and Google.
Section 230 isn't blanket protection. There are exceptions for federal crimes or intellectual property claims. A company could still be held accountable if it knowingly allowed users to post illegal content.
The law provides social media companies with sweeping protections that let them choose what content they restrict, and how. This means social media platforms can't be sued for taking down content or leaving it up.
By eliminating liability risk, Section 230 has allowed companies to experiment. Without it, Twitter and Facebook almost assuredly wouldn't exist, at least not as they do now. And it isn't just big companies that gain from the law. Nonprofits have benefited too. Many experts say the law has enabled the internet to develop into a medium that allows ideas and political discourse to flow freely.
OK. So what are the problems with Section 230?
Most of the problems around Section 230 involve which posts social networks allow to stand and which ones they remove.
Democrats are most concerned about getting big social media companies to take down hate speech, harassment, disinformation and terrorism-related content. Democrats have even accused the companies of using the liability protections as a means to profit from the lies spread on their platforms.
Republicans allege that social media companies censor conservative viewpoints. This is a narrative Trump used in the lead-up to the 2020 presidential election when Twitter and Facebook began slapping warning labels on his posts for containing inaccurate information.
After the election, Trump used social media to falsely claim victory. Following the deadly attack on the Capitol, he was banned by Twitter, Facebook and other social media platforms.
How is Congress proposing to fix these issues?
As the rhetoric around Section 230 has heated up, lawmakers on both sides of the political aisle have introduced a flurry of legislation over the past year. Some bills call for the liability protections to be repealed entirely, while others would alter or refine them. Still others would strip the protections away and make companies earn them back by showing they're politically neutral in how they moderate content.
But the fact that so many bills have been introduced -- some of which overlap in scope, and some of which are vastly different in approach -- is perhaps a good indication that there's no easy fix for this issue.
Where do the executives of the big companies -- Facebook, Google and Twitter -- stand on regulation?
Facebook CEO Mark Zuckerberg has expressed support for changing Section 230. Google CEO Sundar Pichai has said he has concerns about changing or repealing the law, noting during a congressional hearing earlier this year that there could be unintended consequences that make content moderation tougher or that harm free expression. Twitter CEO Jack Dorsey echoed Pichai's concerns that restrictions could be difficult to enforce and could have unintended consequences, especially for smaller platforms.
What do tech advocates think of these reforms?
The Electronic Frontier Foundation, Fight for the Future and other groups have argued it'll do more harm than good to change liability protections to address the many concerns people have with social media companies. They fear these regulations could lead to more censorship as social media companies try to minimize their legal risk.
They say there are better ways to address the issues that concern lawmakers than gutting Section 230. For example, they say stronger federal data privacy legislation could limit the ability of companies like Facebook to target misinformation directly to the people most susceptible to it. They also say increased antitrust enforcement and the restoration of net neutrality protections would help limit the monopoly power of these companies. And they want to see greater transparency in regard to how algorithms manipulate content and newsfeeds.
"We don't agree that more aggressive content moderation on its own will address the harms of Big Tech," said Fight for the Future's Greer. "And we fear that without structural changes, more aggressive platform moderation and content removal will disproportionately harm marginalized people and social movements."
Didn't then-President Trump issue an executive order directing the FCC to write regulations on Section 230?
Yes. Trump heightened the debate over Section 230 in May 2020 when he issued an executive order directing the Federal Communications Commission to establish regulations that clarify the parameters of the good-faith effort Section 230 requires online companies to make when deciding whether to delete or modify content. At the heart of Trump's executive order was the claim that social media sites censor conservative viewpoints they disagree with.
Does the FCC have any authority to make rules limiting Section 230?
That was the big question. The FCC's top lawyer said it did. But Democrats and watchdog groups, such as Public Knowledge, said the FCC doesn't have the authority to impose these regulations.
Then the election happened. Trump lost, Biden won, and the chairman of Trump's FCC, Ajit Pai, didn't pursue writing new regulations.
What's President Joe Biden's stance on Section 230?
Biden has weighed in on the flood of inaccurate health information. Last week, he said that vaccine misinformation on platforms like Facebook is "killing people," and he pressed the social network to do more to combat it. In response, Facebook said it has taken down more than 18 million pieces of COVID-19 misinformation.
The White House said last week that it's reviewing whether social media platforms should be held legally accountable under Section 230 for publishing misinformation.
This isn't Biden's first attack on Section 230. When he was a candidate for president, he argued that social media companies don't deserve protection because they knowingly allow false information on their platforms.