
Hiding Hate in Plain Sight: How Social Media Sites Misread Sarcasm

Irony can make it tough for social networks to determine a user's intent, giving bad actors a level of plausible deniability.

Queenie Wong, Former Senior Writer

Instagram relies on automated technology and human reviewers to moderate hate speech.

Getty Images

Sara Aniano was reading through a racist screed allegedly written by the Buffalo, New York, shooting suspect when a friend pointed out the infographics in the 180-page document. Aniano, a disinformation researcher, remembered seeing the same content on an Instagram account she started tracking last year.

The graphics about the supposed influence of Jewish people came from a website claiming to celebrate the achievements of Jews. The website had several social media accounts, including one on Instagram. But Aniano suspected the website was trying to hide its true intentions -- fueling hatred of Jews -- by using ironic language.

One graphic showed the number of Jewish presidents at Ivy League universities, insinuating that Jewish people control education. Another stated Jews had largely pioneered modern American liberalism. The graphics appear designed to bolster a bogus conspiracy theory that Jews are trying to replace white Americans with nonwhite immigrants, a claim that appears to have partly motivated the killing of 10 Black people in a Buffalo supermarket.

"At the end of the day, of course, they're not celebrating these people," said Aniano, who has spoken with CNET in the past and now works for the Anti-Defamation League. "They're putting targets on their backs."

Aniano reported the account to Instagram multiple times last year, but it remained online. In June, the Meta-owned social media platform pulled the account, which had been online for two years and had roughly 18,000 followers, after CNET inquired about it in late May. The company said the account violated its rules but didn't specify which ones. (CNET isn't naming the account, to avoid driving traffic to the affiliated website.) The website didn't respond to a request for comment.

Instagram's slow response underscores the challenges social networks face when policing content that uses humor, sarcasm or irony to conceal its creators' true motives. Social networks have long struggled to balance free expression and online safety, a task that's only gotten tougher as extremists try to evade detection. Ahead of the 2022 US midterm elections, extremist violence is a growing concern.

From January to March 2022, Instagram took action against 3.4 million pieces of content for hate speech. The flood of posts on the Meta-owned social network means that not all reports are reviewed by a human moderator. Instead, Instagram relies on automated technology that can't always detect irony. Even human moderators can find it difficult to determine a user's intent, making irony an effective tool for evading detection.

The account that Aniano flagged didn't generate a human review. "Our technology has found that this account likely doesn't go against our Community Guidelines," the platform initially said in what appeared to be an automated response.

Meta, the parent company of Facebook and Instagram, relies on a mix of human moderators and automated technology to police content. The company has been trying to improve its artificial intelligence so it does a better job of understanding the connection between words and images in memes, which often rely on inside jokes.
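
To see why surface-level automation struggles here, consider a deliberately simplified sketch in Python. The word list, scoring function and sample posts below are all hypothetical, and real systems use machine-learned classifiers rather than keyword matching, but the failure mode is the same: a post framed as a "celebration" gives a pattern-matcher nothing to flag.

# Illustrative sketch only: a made-up keyword scanner, not Meta's actual
# system. It shows why surface-level automation can miss ironic posts.

TOY_HATE_LEXICON = {"hate", "vermin", "subhuman"}  # hypothetical word list

def toy_moderation_score(post: str) -> float:
    """Return the fraction of words in the post that match the lexicon."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    return sum(w in TOY_HATE_LEXICON for w in words) / len(words)

# An explicitly hateful post trips the scanner...
print(toy_moderation_score("They are vermin and I hate them"))  # about 0.29, flagged

# ...while an ironic "celebration" pushing the same message contains
# no flagged words at all, so the automated pass waves it through.
print(toy_moderation_score("Celebrating the remarkable influence of this community"))  # 0.0

Catching the second post requires judging intent from context, which is the kind of call that still falls to human reviewers and outside researchers.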

A slow response 

Instagram isn't the only social media service the website used for promotion. Accounts popped up on Facebook, Twitter, TikTok, Telegram, 4chan and Patreon. The Buffalo gunman could have encountered the site's material on any of them.

"It surprised me in the sense that I was right about its influence," Aniano said. "But it didn't surprise me that somebody that deep in the hole of hate speech and anti-Semitism got a hold of it."

Investigators work the scene of a mass shooting at a Buffalo, New York, grocery store in May.

A white gunman fatally shot 10 people at a grocery store on May 14 in a historically Black neighborhood of Buffalo, New York. The shooting is being investigated as a hate crime and an act of racially motivated violent extremism.

Kent Nishimura / Los Angeles Times via Getty Images

Katie McCarthy, a researcher at the Anti-Defamation League's Center on Extremism, said content from the website has been used to promote anti-Semitic tropes online. But the ADL has had difficulty figuring out who runs the site, which is registered by Withheld for Privacy, an Iceland-based privacy service. For example, the ADL has been unable to confirm the site's claim that it's run by two Jewish people.

Using humor and sarcasm allows extremists to outsmart bans, McCarthy said. They can always "claim that they're just joking," she said, adding it's a form of "plausible deniability."

Meta responded more slowly than other platforms. In June, the company also removed the website's Facebook account, which had 2,100 followers. On Instagram, users raised questions in the account's comments about why it highlighted controversial figures such as American producer Harvey Weinstein, who was sentenced to 23 years in prison for sex crimes. One Instagram user commented that the account was "a sarcastic page," while another said it was run by "antisemites" pretending to be Jewish.

On Instagram, the website directed people to a Patreon page that asked users to contribute $2 a month to support the creation of "blogposts and tools to fight misinformation." Patreon said it removed the account for violating its "Hate Speech guidelines by propagating negative stereotypes and segregational content."

Screenshots of the website's Twitter account archived in the Wayback Machine show that Twitter had suspended it by December 2021. Twitter said the account violated its ban evasion policy, which prohibits "attempts to circumvent prior enforcement, including through the creation of new accounts." The website created the account in 2020 and had more than 15,000 followers on Twitter by the time it was suspended. Twitter also hides direct messages from the website behind a notice saying the account could be "suspicious." Users can still click through the notice to view the message.

The website also had a TikTok account with 331 followers but didn't post as frequently on the short-form video app. One video about Jewish people in US President Joe Biden's administration tallied 28,600 views. In late May, TikTok pulled the account for violating its rules, though it didn't specify which ones.

The website is still sharing content on messaging platform Telegram, where it has roughly 10,000 subscribers. Telegram didn't respond to a request for comment.

Removing these accounts, Aniano said, helps curb the spread of the toxic message.

"Taking it down doesn't mean the ideology goes away," she said. "But it gives the broader public less opportunity to access it themselves." 

The site's graphics include crude stereotypes about the wealth and influence of Jewish people. For example, one shows photos of 25 hedge fund managers and points out that two-thirds of them are Jewish, an unsubtle suggestion that Jews are obsessed with money.

People can easily download the graphics and use them like memes that circulate on the internet. In some of them, photos of the Jewish people are visible, while images of non-Jewish people are obscured.

An ongoing problem

Extremists can also create fake social media accounts to hide their intentions, a problem social networks have grappled with for years. Facebook and other social networks look at the behavior of a network of accounts, rather than at the content of individual posts, when they crack down on attempts to manipulate public debate.
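
As a rough illustration of that behavior-first approach, here is a minimal Python sketch. The account names, links, activity log and suspicious_pairs helper are all hypothetical: instead of reading what any post says, it flags pairs of accounts that push the same link at nearly the same time.

# Illustrative sketch only, not Facebook's actual system: flagging a
# cluster of accounts by shared behavior (same links, synchronized
# posting) rather than by the content of any single post.
from collections import defaultdict
from itertools import combinations

# Hypothetical activity log: (account, link_posted, hour_of_post)
activity = [
    ("acct_a", "example-site.com/graphic1", 14),
    ("acct_b", "example-site.com/graphic1", 14),
    ("acct_c", "example-site.com/graphic1", 15),
    ("acct_d", "news-site.com/story", 9),
]

def suspicious_pairs(log, max_hours_apart=1):
    """Pair up accounts that post the same link at nearly the same time."""
    by_link = defaultdict(list)
    for account, link, hour in log:
        by_link[link].append((account, hour))
    pairs = set()
    for link, posts in by_link.items():
        for (a1, h1), (a2, h2) in combinations(posts, 2):
            if a1 != a2 and abs(h1 - h2) <= max_hours_apart:
                pairs.add(frozenset((a1, a2)))
    return pairs

# acct_a, acct_b and acct_c get paired; acct_d, posting an unrelated
# link at a different hour, stays out of the cluster.
print(suspicious_pairs(activity))

Real systems weigh far more signals, such as account creation dates and shared infrastructure, but the principle holds: coordinated behavior can be visible even when every individual post looks innocuous.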

In 2019, American journalist Yair Rosenberg tweeted that extremists were using the anonymous image board site 4chan to urge people to create a "massive movement of fake Jewish profiles on Facebook, Twitter" and other social networks, noting the plan had the "benefit of being uncensored by big tech."

In 2020, CNN reported that a fake Twitter account claiming to be Jewish tried to fuel tensions between Jewish and Black Americans. 

Kesa White, a program research associate at American University's Polarization and Extremism Research and Innovation Lab, said moderating ironic content can be tough because extremists have also revised memes to evade detection.

It isn't enough, she said, for social media platforms to rely on artificial intelligence or human moderators. They need experts, such as researchers who are embedded in these communities and know what to look out for. 

"There's so many different layers to it that are changing everyday, which make this so much more difficult for researchers and social media companies to keep up with," White said. 

CNET's Oscar Gonzalez contributed to this report.