Culture

Google and Jigsaw puzzle out AI fix for toxic comments

To make comments sections more civil, Jigsaw is using machine learning to help publishers moderate.


Google and Jigsaw want to combat toxic online comments with AI.

JOSH EDELSON/AFP/Getty Images

There's a reason people caution you not to read the comments. Rather than serve as a forum for debate, they often devolve into, well, a cesspool.

In an effort to tone down the hate in comments sections, Google and its Alphabet corporate sibling Jigsaw, a technology incubator, on Thursday launched a machine learning tool that weeds out the nastier comments.

"Because of harassment, many people give up on sharing their thoughts online or end up only talking to people who already agree with them," Jigsaw product manager CJ Adams said in a statement.

The software, called Perspective, scores comments based on their similarity to other comments that human reviewers have categorized as "toxic." Perspective has analyzed hundreds of thousands of such comments to learn what's toxic and what's not, and because it's built on machine learning, it keeps improving as it goes.

Publishers have a few options for what to do with the information Perspective provides. They can flag comments and let human moderators take it from there, show commenters whether their own comments are considered toxic, or let readers sort comments by toxicity themselves.
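To make that workflow concrete, here is a minimal sketch of how a publisher might consume scores from a Perspective-style API. The request and response shapes, field names, and the 0.8 threshold below are illustrative assumptions, not Perspective's documented interface; a real integration would send the request to Jigsaw's service and receive the response over HTTP.

```python
# Sketch of a publisher-side moderation hook for a toxicity-scoring API.
# All field names and the threshold are hypothetical, for illustration only.

def build_request(comment_text):
    """Build a (hypothetical) scoring request for one comment."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def should_flag(response, threshold=0.8):
    """Pull the 0-to-1 toxicity score out of a (mocked) API response
    and decide whether to route the comment to a human moderator."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

# Mocked response standing in for a live API call:
mock_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92}}
    }
}
print(should_flag(mock_response))  # True -> send to the moderation queue
```

The same score could instead be shown back to the commenter before posting, or exposed as a sort key so readers can filter out high-toxicity comments, matching the three publisher options described above.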

Online harassment is all too common -- 72 percent of internet users in the US have witnessed it and 47 percent say they've experienced it themselves. Aside from simply being unpleasant, it can have a chilling effect on expression: 27 percent of users say they self-censor out of fear, according to a November study from the Data & Society Research Institute.

"To tackle the biggest and most important problems we face, we need better ways to have conversations at scale," Lucas Dixon, Jigsaw chief research scientist, said in a statement.

Other tech companies have introduced tools to fight hate online. In October, Microsoft launched a way to report online abuse for its services including Skype, Outlook and Xbox. Social media platforms like Twitter are also reacting to pressure to curb online harassment. In November, for example, Twitter expanded its mute function.

The New York Times is already testing Perspective to moderate its comments, and other publishers will be able to apply for access to the tool, free of charge, starting Thursday.
