Tech giants and political leaders from around the globe came together in Paris on Wednesday to pledge their commitment to tackling the spread of terrorist content online, following the attack in Christchurch, New Zealand, in March.
At a summit hosted jointly by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, companies including Facebook, Twitter and Google agreed to sign the "Christchurch Call." The pledge was created in response to the gunman's use of Facebook to livestream part of the attack, which killed 51 people at two mosques. The tech giants also agreed to a nine-point plan outlining ways they'll work together more closely to combat the problem.
The move highlights how tech companies plan to take action after failing to stop the live video of the New Zealand attack from spreading on social media sites. The video circulated widely before it was removed, and more than a month after the attack, copies of it were still surfacing online, illustrating just how difficult this content is for social networks to pull down.
Eight tech companies signed the Christchurch Call, including Microsoft, Amazon and YouTube, along with 17 national governments and the EU. But one significant name is missing from the list of supporters -- the White House announced it would not be signing the commitment due to free speech concerns.
The White House Office of Science and Technology Policy issued a statement that said while the US supports the goals of the Christchurch Call, it's "not currently in a position to join the endorsement."
"We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press," the statement reads.
Supporters of the call said in a joint statement that signing the pledge will strengthen partnerships between governments, society and tech companies.
"Terrorism and violent extremism are complex societal problems that require an all-of-society response," supporters said in the statement.
Tech giants also said they would take action individually to combat terrorist content. That includes updating their rules against terrorist content, giving users ways to report terrorist and violent extremist content, investing in technology, checking who is livestreaming, and publishing reports that include the amount of terrorist and violent content that's detected and removed. The companies could vet who is streaming a live video by rating users of the tool and reviewing an account's activity.
On Tuesday, Facebook said it would ban users from livestreaming for a period of time if they've broken certain rules on the social network, including its policy against terrorist content. The company has pushed back against other ideas, though, such as delaying the broadcast of a live video.
The companies that support the pledge also outlined steps that will help them work better together. Those actions include sharing data and tools to improve technology aimed at combating terrorist content, and educating users about how to report the offensive content and why they shouldn't spread it online. The companies also plan to support research into combating hate and bigotry, and to create a system for responding quickly to an emerging or active event.
Originally published May 15, 9:40 a.m. PT.
Update, 10:39 a.m. PT: Adds White House statement.
Update, 12:27 p.m. PT: Adds more background.