Some of the most horrific mass shootings have followed a chillingly similar script: Angry white men, driven to extremism on online message boards, post manifestos railing against minorities. When they begin to shoot, members of those message boards post responses that encourage them to kill more.
President Donald Trump says it needs to stop.
In a speech afterward, Trump called on social media companies to identify mass shooters before they open fire.
"I am directing the Department of Justice to work in partnership with local state and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike," he said.
In theory, predictive policing online should be possible. Twitter, Facebook and YouTube have increasingly harnessed artificial intelligence and other technology to identify and act on bad behavior as they sift through billions of posts. They've been able to pull down terrorist propaganda from ISIS, for example.
The challenge, experts say, is that correctly identifying these lone wolves is tougher than finding overt terrorist propaganda. One reason: It's hard to determine whether a post is preparation for a terrorist act or merely someone spouting off.
Another problem is that message boards have changed the way extremists recruit to their causes. Many of these attackers know each other only online. Some may not interact directly.
"In the past, there would be a more terrestrial component to how hate groups would organize and recruit," said Brian Levin, who runs the Center for the Study of Hate and Extremism at California State University, San Bernardino. That means they'd meet somewhere in the real world to chat or exchange propaganda.
Manifestos posted online have taken the place of those real-world connections. Manifestos reference other manifestos, effectively writing a new chapter in an expanding meta-book of hate. The writers almost always post anonymously. They rarely post overt threats, because those would break the rules of most social media sites, which could get them kicked off and deprive them of a platform.
"The issue is can we get to these folks who, while stealth, are delivering clues, oftentimes the last of which is right before their attack," Levin added.
Not always right
Of course, Facebook and Twitter have taken action, primarily against propaganda supporting ISIS and Al-Qaeda. The social media companies have occasionally announced takedowns of white supremacist material, but they haven't provided broad data on the topic.
Twitter says it suspended 166,513 unique accounts for promoting terrorism during the second half of 2018, crediting its internal tools for the majority of those suspensions.
"In the majority of cases, we take action at the account setup stage -- before the account even Tweets," Twitter said.
Meanwhile, Facebook said it found and removed terrorist content before it was reported by the community during the six months between April and September 2018.
But experts say propaganda that lionizes terrorists is easier to identify as dangerous than an angry person spouting off about politics. And reading motive into hyperbolic tweets raises knotty questions about free speech.
"When we look at what predictive policing looks like, it always results in over-policing, arrests and prosecution of communities of color," said Brittan Heller, a fellow at Harvard's Carr Center for Human Rights, who previously worked for the US Department of Justice and the International Criminal Court. "Whenever I hear people trying to predict criminality, as a former prosecutor, it makes the hair on the back of my neck stand up," she said.
Aside from the potentially thorny civil rights issues, the technology at Facebook, Twitter and YouTube is far from perfect. Their automated computer programs have screwed up plenty of times.
When Facebook put a computer in charge of selecting trending topics, the algorithm surfaced false stories instead of actual news stories. And after a shooter killed 17 people at Marjory Stoneman Douglas High School in Parkland, Florida, the top trending video on YouTube accused David Hogg, a student survivor, of being an actor.
AI may eventually get better at understanding hate-riddled posts. But Heller says Trump and other politicians need to look beyond technology for an answer to this growing domestic threat.
"It's less a question about the internet, and it's more a question about gun-based violence," Heller said. "We can't get to actual solutions if we keep blaming the virtual world."
CNET's Queenie Wong contributed to this report.