
EU Strengthens Disinformation Rules to Target Deepfakes, Bots, Fake Accounts

[Image: a phone displaying social media logos over the EU flag.] New EU rules are designed to make the internet better and safer.

What's happening

The EU strengthens rules around how tech companies tackle problems such as deepfake videos, bots and fake accounts.

Why it matters

The revised code means that tech companies could incur massive fines for failing to take action on disinformation.

What's next

Signatories to the code, including Meta, Google, Twitter and TikTok, have six months to show they're complying with the updated rules.

The European Commission on Thursday released an overhauled set of rules designed to stem the flow of disinformation. Under the EU's strengthened Code of Practice on Disinformation, signatories that fail to take action could face fines of up to 6% of their global revenue. Tech giants Meta, Google, TikTok and Twitter are among the signatories.

Commissioners Věra Jourová and Thierry Breton said in a press conference that the updates address previous shortcomings. The revised rules cover "manipulative behaviors," including deepfake videos, bots and fake accounts, but also aim to eliminate financial incentives for the spread of disinformation by ensuring that disseminators of problematic content don't benefit from advertising revenue. Platforms have also been asked to give users new tools to recognize, understand and flag disinformation.

"Disinformation is a form of invasion of our digital space, with tangible impact on our daily lives," Breton said in a statement. "Online platforms need to act much more strongly, especially on the issue of funding. Spreading disinformation should not bring a single euro to anyone."

When the EU introduced its disinformation code in 2018, participation was voluntary and the rules were created as an alternative to legislation. Four years later, the problem of disinformation, which became apparent following the 2016 UK Brexit referendum and US presidential election, has become even more widespread, particularly related to the COVID-19 pandemic and Russia's war against Ukraine. 

"This new anti-disinformation code comes at a time when Russia is weaponizing disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly," Jourová said.

Europe's digital rules have also changed since the code was introduced, and the code is now underpinned by legislation -- specifically the Digital Services Act, which was finalized in April. The Digital Services Act is designed to hold digital platforms to account and will give the EU more power to monitor companies to ensure they're complying with the disinformation code.

Signing on to the code is still voluntary, and the EU has now opened it up to smaller companies alongside the existing signatories, which have all been major tech platforms. There are 33 signatories so far to the revised rules, more than double the previous number. Updating the rules has been a collaborative process with existing signatories, Breton said. And although the EU has been subject to lobbying by tech companies, Breton said that he and Jourová are used to pressure and that big tech companies didn't get their way on everything they asked for.

"Combating the spread of misinfo is a complex and evolving societal issue," Meta President of Global Affairs Nick Clegg tweeted Thursday. "We continue to invest heavily in teams and technology, and we look forward to more collaboration to address it together."

A Twitter spokesman said the company welcomed the updated code. "Through and beyond the code, Twitter remains committed to tackling misinformation and disinformation as we continue to evaluate and evolve our approach in this ever-changing environment," he said.

TikTok also weighed in.

"As a signatory to the Code of Practice on Disinformation since 2020, we're proud to have played our part in drafting this new code, and we look forward to furthering our work by joining forces to combat disinformation and promote authentic online experiences for our communities," said a spokeswoman for TikTok.

A spokesman for Google welcomed the introduction of the code, calling it "an important instrument in the fight against disinformation." "The global pandemic and the war in Ukraine have shown that people need accurate information more than ever and we remain committed to making the Code of Practice a success," he said.

But not all signatories praised the code unreservedly. If big tech platforms don't step up their actions, the code isn't worth the paper it's written on, said Luca Nicotra, campaign director at Avaaz, a nonprofit that's one of the signatories. "This is why we need monitoring with teeth from the EU Commission, that boldly flags platform failures," he said.

NewsGuard, a misinformation tracker and new signatory, said in a statement that it's disappointed the Commission only recommended that big tech platforms provide indicators of how trustworthy content is, rather than making it mandatory.

"The code will continue to fail to protect users until the platforms are forced to provide independent information about the journalistic standards of the news and information spread and recommended by the platforms, which too often act as the useful idiots for propagandists such as those in the Kremlin," said NewsGuard co-CEO Gordon Crovitz in a statement.