UK Online Harms: Social networks must keep users safe or face huge fines, bans

Senior execs from social media companies could also face criminal charges if they fail to keep users safe.

Katie Collins, Senior European Correspondent
[Photo: Streaks of light along Westminster Bridge, with the Houses of Parliament in the background. The UK government is preparing to crack down on tech companies. Lingxiao Xie/Getty]

The UK government on Tuesday unveiled its long-awaited Online Harms legislation proposals, which would force tech companies to keep people safe online in what it is calling "a new age of accountability" for social media. Companies that fail to abide by the rules will face fines of up to £18 million ($24 million) or 10% of annual global turnover, whichever is higher, or could have their services blocked in the UK. The legislation also allows for criminal sanctions to be imposed on senior managers.

The proposals are set to be outlined on Tuesday in Parliament by Digital Secretary Oliver Dowden and Home Secretary Priti Patel. The government says that keeping children safe is at the heart of the proposals, which will give online companies a legal duty of care toward their users. It plans to bring the proposals forward in an Online Safety Bill next year.

"I'm unashamedly pro tech but that can't mean a tech free for all," said Dowden in a statement. "We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech."

The UK's crackdown on what it calls online harms comes at a time when many world powers are on the point of enacting regulation that would force changes in how tech companies operate. Also on Tuesday, the EU is set to unveil two pieces of legislation, the Digital Services Act and the Digital Markets Act. Meanwhile, the US is pursuing antitrust investigations against Google and Facebook and discussing the possibility of breaking up tech giants altogether.

In the UK, discussions about forcing tech companies to take more responsibility for keeping users safe have been going on since 2017, when the Digital Economy Act stipulated that some form of age verification be put in place to protect children from viewing online pornography. The Online Safety Bill is expected to supersede this.

All companies will have to do more to protect children from grooming, bullying and pornography, according to the proposals laid out by the government on Tuesday. The proposals also include protections for users of all ages against illegal content (child sexual abuse, terrorist material and suicide content), as well as certain types of content that is legal but could be harmful. The government gives the example of content that spreads misinformation and disinformation about the coronavirus vaccine.

The largest and most popular social platforms (which the government says will likely include Facebook, TikTok, Instagram and Twitter) will need to go further than other companies in protecting against these particular harms. They will have to set and enforce clear terms and conditions that explicitly state how they will handle content that is legal but poses "a reasonably foreseeable risk of causing significant physical or psychological harm to adults." They'll also be required to publish transparency reports about how they're doing this.

All companies will need to provide reporting features, and will also need to provide a mechanism for people to appeal takedowns.

The proposals follow the Online Harms White Paper, published in the spring, in which the government said it would name a safety czar to regulate social media. That regulator has now been confirmed as Ofcom, the UK's broadcasting and telecommunications watchdog. "We're gearing up for the task by acquiring new technology and data skills, and we'll work with Parliament as it finalises the plans," said Ofcom's Chief Executive Melanie Dawes in a statement.

As well as social media platforms, the rules will apply to search engines, video sharing and instant messaging platforms, online forums, dating apps, commercial pornography websites, online marketplaces, peer-to-peer services, consumer cloud storage sites and video games that allow online interaction. They will not apply to online news sites or their comments sections.

Twitter said it was committed to keeping people safe online.

"We support regulation that is forward thinking, understanding that a one-size-fits all approach fails to consider the diversity of our online environment," a Twitter spokesperson said. "We welcome the increased focus on the protection of those who use online services themselves and look forward to reviewing the Government's full response to the UK Online Harms consultation."

Facebook said regulations are necessary to help keep harmful content off the internet.

"Protecting people from harm without undermining freedom of expression or the incredible benefits the internet has brought is a complex challenge," Rebecca Stimson, Facebook's head of UK Public Policy, said in a statement. "We look forward to continuing the discussion with government, Parliament and the rest of the industry as this process continues."

Representatives for TikTok and Instagram didn't respond to requests for comment.