
President Trump wants social media to catch shooters before they strike. It's going to be hard.

Artificial intelligence isn't there yet.

Ian Sherr Contributor and Former Editor at Large / News

President Donald Trump delivered remarks on the mass shootings in El Paso, Texas, and Dayton, Ohio Monday.

Getty Images

Some of the most horrific mass shootings have followed a chillingly similar script: Angry white men, driven to extremism in online forums like 8chan and Gab, post manifestos railing against minorities. When they begin to shoot, members of the message boards post responses that encourage them to kill more.

President Donald Trump says it needs to stop.

In a speech after two shootings left at least 31 people dead, Trump called on social media companies to identify mass shooters before they open fire.

"I am directing the Department of Justice to work in partnership with local state and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike," he said.

In theory, predictive policing online should be possible. Twitter, Facebook and YouTube have increasingly harnessed artificial intelligence and other technology to identify and act on bad behavior as they sift through billions of posts. They've been able to pull down terrorist propaganda from ISIS, for example, and they have programs that can often identify child pornography automatically.


People gather near white handmade crosses memorializing the victims of a mass shooting which left at least 22 people dead in El Paso, Texas.

Getty Images

The challenge, experts say, is that correctly identifying these lone wolves is tougher than finding overt terrorist propaganda. For one, it's hard to determine whether a post is preparation for a terrorist act or merely someone spouting off.

Another problem is that message boards have changed the way extremists recruit to their causes. Many of these attackers know each other only online. Some may not interact directly.

"In the past, there would be a more terrestrial component to how hate groups would organize and recruit," said Brian Levin, who runs the Center for the Study of Hate and Extremism at California State University, San Bernardino. That means they'd meet somewhere in the real world to chat or exchange propaganda.

Manifestos online have taken the place of those real world connections. Manifestos reference other manifestos, effectively writing a new chapter in an expanding meta-book of hate. The writers almost always post anonymously. They rarely post overt threats because those would break the rules of most social media sites, which could get them kicked off and deprive them of a platform.

"The issue is can we get to these folks who, while stealth, are delivering clues, oftentimes the last of which is right before their attack," Levin added.

Not always right

Of course, Facebook and Twitter have taken action, primarily against propaganda supporting ISIS and Al-Qaeda. The social media companies have occasionally announced takedowns of white supremacist material, but haven't provided comprehensive data on the topic.

Twitter says it suspended 166,513 unique accounts for promoting terrorism during the second half of 2018. The company credited its internal tools for flagging 91% of the accounts.

"In the majority of cases, we take action at the account setup stage -- before the account even Tweets," Twitter said earlier this year.

Meanwhile, Facebook said it found more than 99% of ISIS and Al-Qaeda content before it was reported by the community in the six months between April and September 2018.

But experts say propaganda that lionizes terrorists is easier to identify as dangerous than an angry person spouting off about politics. And reading motive into hyperbolic tweets raises knotty questions about free speech.

"When we look at what predictive policing looks like, it always results in over-policing, arrests and prosecution of communities of color," said Brittan Heller, a fellow at Harvard's Carr Center for Human Rights, who previously worked for the Anti-Defamation League, the US Department of Justice and the International Criminal Court. "Whenever I hear people trying to predict criminality, as a former prosecutor, it makes the hair on the back of my neck stand up," she said.

Aside from the potentially thorny civil rights issues, the technology at Facebook, Twitter and YouTube is far from perfect. Their automated computer programs have screwed up plenty of times.

When Facebook put a computer in charge of selecting trending topics, it began sharing hoaxes and conspiracy theories instead of actual news stories. After a shooter killed 17 people at Marjory Stoneman Douglas High School in Parkland, Florida, the top trending video on YouTube accused David Hogg, a survivor, of being a "crisis actor."

AI may eventually get better at understanding hate-riddled posts. But Heller says Trump and other politicians need to look beyond technology for an answer to this growing domestic threat.

"It's less a question about the internet, and it's more a question about gun-based violence," Heller said. "We can't get to actual solutions if we keep blaming the virtual world."

CNET's Queenie Wong contributed to this report.