
Facebook wants to be a 'hostile environment' to terrorists

The comments come after an attack in London this weekend left seven people dead.


A child holds up a message for the victims of the London terrorist attacks at the County Hotel in Carlisle, England. (Jeff J Mitchell / Getty Images)

Facebook is stepping up its war on terrorist activities.

The social networking giant said Sunday it plans to be hostile to terrorists and pledged to "aggressively remove terrorist content" after an attack in London left at least seven dead this weekend.

"We want Facebook to be a hostile environment for terrorists," Simon Milner, the company's director of policy, said in a statement.

The comments come after British Prime Minister Theresa May called for new regulations to restrict the spread of extremist content.

"We cannot allow this ideology the safe space it needs to breed, yet that is precisely what the internet and the big companies that provide internet-based services provide," she said in a statement outside Downing Street.

Social media sites have become a popular conduit for terrorist groups to share their ideas and recruit people to the cause. These companies have strict rules against posting hate speech and will remove such content and accounts when they're discovered. But the task of searching through the activity of hundreds of millions of users is challenging.

Some users are trying to hold social media companies responsible for terrorists' online activity. Last month the families of the San Bernardino shooting victims sued Facebook, Google and Twitter, accusing them of knowingly allowing terrorist activity to take place on their respective social media platforms.

Facebook wants to "provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism," Milner said.

"Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it," he said.

Facebook said in December it had formed a partnership with Twitter, YouTube and Microsoft to create a shared database of images and videos that promote terrorism. The database will store and share among partners the "hashes" -- or unique digital fingerprints -- of terrorist content that has been removed from the services for violating their community policies.

Images added to the database won't automatically be banned by partner sites. Each company will contribute images and decide whether to remove content that matches a shared hash based on its own policies and definitions of terrorist content.
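To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a shared hash database could work. The partners' actual systems are not public, and real matching typically relies on perceptual hashing rather than an exact cryptographic hash; Python's standard hashlib is used here purely as a stand-in, and all names and data are illustrative.

```python
import hashlib

# Illustrative sketch only: the consortium's real database and matching logic
# are not public. Production systems generally use perceptual hashes, which
# match near-duplicate images; SHA-256 here only matches byte-identical files.

# Shared set of fingerprints contributed by partner companies (hypothetical data).
shared_hash_db = set()


def fingerprint(content: bytes) -> str:
    """Compute a stand-in 'hash' (digital fingerprint) for a piece of content."""
    return hashlib.sha256(content).hexdigest()


def contribute(content: bytes) -> None:
    """A partner adds the fingerprint of content it removed to the shared database."""
    shared_hash_db.add(fingerprint(content))


def review_upload(content: bytes, violates_own_policy) -> str:
    """Check an upload against the shared database.

    A match does not trigger automatic removal; each company applies its own
    policies and definitions of terrorist content before deciding.
    """
    if fingerprint(content) in shared_hash_db and violates_own_policy(content):
        return "remove"
    return "keep"


# Example: one partner contributes a fingerprint; another reviews a matching upload
# and removes it under its own (hypothetical) policy.
contribute(b"example-image-bytes")
decision = review_upload(b"example-image-bytes", violates_own_policy=lambda c: True)
print(decision)  # "remove"
```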
