Facebook to hire 3,000 for its front line against violent videos

With deaths and beatings marring Facebook News Feeds, Mark Zuckerberg said he will boost the staff handling reports of violent broadcasts by two-thirds.


Facebook is adding 3,000 more people to its team to monitor reports of inappropriate content, like violent videos.

Facebook

Facebook is sharply expanding the team of humans it puts on its line of defense against the kind of violent broadcasts that stoked an outcry last month.

CEO Mark Zuckerberg said Wednesday that Facebook will hire 3,000 more people over the next year to monitor reports about violent videos and other objectionable material. That team already had 4,500 people reviewing millions of reports every week, he said Wednesday in a post on his own Facebook page.

"Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself," Zuckerberg said. "In other cases, we weren't so fortunate."

The company, however, has attracted more attention for its gruesome monitoring misses than for its success stories. Last month, it was criticized for hosting a video of a Thai man killing himself and his infant daughter, a live broadcast that reportedly remained up for nearly a day.

It was the latest in a string of shocking videos that raised questions about Facebook's ability to handle the damaging downside to its aggressive video campaign. Over the last year, Facebook has amped up video -- and particularly live video -- in the algorithm recipe for your News Feed. With more than 1 billion daily visitors, Facebook offers a nearly unrivaled audience, but as the company makes its broadcasting tools more accessible, scenes of violence have also grown more common.

Facebook isn't alone in this dilemma. Any company that allows people to share media runs the risk of graphic uploads. Google's YouTube, for example, also hosted a copy of the Thai video. (That upload was removed within 15 minutes, that company said.)

But Facebook has been inconsistent in its response to graphic content.

While it has let violent crimes stream on one hand, the company has also taken flak for removing material with social significance, like a live stream showing the aftermath of the fatal police shooting of a black man at a traffic stop in July, and a post of an iconic Vietnam War photo because it included child nudity.

People broadcast on Facebook for a range of innocuous reasons, but some "want to be the latest sensation, even if it's very dark and evil stuff," said Betsy Page Sigman, a teaching professor at Georgetown University's McDonough School of Business who focuses on technology and social media.

While adding response workers is a positive step for Facebook, she noted that the staffing expansion would be most effective if it's paired with stronger technology and predictive analytics to identify troublesome videos before viewers flag them.

"It's funny that they're not talking about that. They're talking about people," she said.

In Wednesday's post, Zuckerberg also reiterated the company's recent pledge to simplify how users can report objectionable material, speed up the time it takes reviewers to determine which posts violate its community standards, and ease the process of contacting law enforcement if someone needs help.

Beyond that, he didn't specify how Facebook would be changing the protocols for its widened monitoring workforce. The team responds to reports of objectionable material and doesn't directly monitor uploads unless they reach a threshold of popularity, according to the company.

With artificial intelligence at its current state of development, the most effective technology combines AI and human oversight, according to Ryan Detert, the CEO of Influential, a social-marketing company that taps the cognitive computing of IBM Watson.

"The hiring of extra employees will surely help in catching and stopping objectionable content, but it isn't necessarily a fail-safe," he said. "What we will be able to do with photo and video AI in the very near future will likely be the best defense, in terms of flagging objectionable content most quickly."

For Facebook, which makes most of its money from advertisements, adding the extra manpower could alleviate marketer worries.

In the wake of the violent video backlash, "there's got to be a significant portion of their advertisers that have asked, 'At what point is enough enough that I pull the handle and flush the toilet?'" said Harry Kargman, CEO of Kargo, a mobile ad company. Increasing its monitoring workforce is a signal Zuckerberg is taking the situation seriously, he said.

Facebook COO Sheryl Sandberg said in a reply to Zuckerberg's post that keeping people safe is the company's top priority. "We won't stop until we get it right," she said.

CNET's Richard Nieva contributed to this report.

Originally published at 7:38 a.m. PT.
Updated at 8:11 a.m. PT, 10:30 a.m. PT and 12:40 p.m. PT: Added comment from experts and Sheryl Sandberg, as well as background information.
