Twitter AI bias contest shows beauty filters hoodwink the algorithm

The service's algorithm for cropping photos favors people with slimmer, younger faces and lighter skin.

Stephen Shankland, principal writer
[Image: Twitter app icon and logo. Stephen Shankland/CNET]

A researcher at Switzerland's EPFL technical university won a $3,500 prize for showing that a key Twitter algorithm favors faces that look slim and young, with skin that is lighter or warmer toned. Twitter announced Sunday that it awarded the prize to Bogdan Kulynych, a graduate student who studies privacy, security, AI and society.

Twitter sponsored the contest to find problems in the "saliency" algorithm it uses to crop the photos shown on your Twitter timeline. The AI bias bounty is a new spin on the now-mainstream bug bounties that companies pay outsiders to find security vulnerabilities.

AI has revolutionized computing by effectively tackling messy subjects like captioning videos, spotting phishing emails and recognizing your face to unlock your phone. But AI algorithms trained on real-world data can reflect real-world problems, and tackling AI bias is a hot area in computer science. Twitter's bounty is designed to find such problems so they eventually can be corrected.

Earlier this year, Twitter itself confirmed its AI system showed bias when its cropping algorithm favored images of white people over Black people. But Kulynych found other problems in how the algorithm cropped photos to emphasize what it deemed most important.

[Image: Researcher Bogdan Kulynych found that Twitter's AI algorithm often favored younger, lighter-skinned and slimmer variations of an original photo. Twitter's "salience" score, used to determine how to crop photos, increased 35%, 28% and 29%, respectively, for the rightmost variations in the top, middle and bottom sequences shown here. Bogdan Kulynych]

"The target model is biased towards deeming more salient the depictions of people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits," Kulynych said in his project findings. "This bias could result in exclusion of minoritized populations and perpetuation of stereotypical beauty standards in thousands of images."

Kulynych's system compared the saliency of an original photo of a human face to a series of AI-generated variations. He found salience scores often increased with faces that appeared younger and thinner. The algorithm also issued higher scores for skin that was lighter, warmer toned, higher contrast and with more saturated colors.
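Kulynych's method boils down to a simple scoring harness: score the original face, score each AI-generated variant, and report the percent change. A minimal sketch is below; note that `saliency_score` here is a hypothetical stand-in (a plain pixel average), not Twitter's actual model, which predicts where a viewer's eyes would land on an image.

```python
def saliency_score(image):
    # Hypothetical placeholder for Twitter's saliency model.
    # Here we simply average pixel values of a 2D grayscale image.
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def saliency_increase(original, variant):
    """Percent change in saliency score from the original to a variant."""
    base = saliency_score(original)
    return 100.0 * (saliency_score(variant) - base) / base

# Toy 2x2 "images": the variant scores higher, mimicking the bias finding.
original = [[0.4, 0.5], [0.5, 0.6]]
variant = [[0.5, 0.6], [0.7, 0.9]]
print(round(saliency_increase(original, variant), 1))  # → 35.0
```

In the real experiment, the variants were generated by a face-synthesis model nudged along attributes like age, skin tone and face shape, and a consistent rise in scores along one attribute indicated bias.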

Twitter praised the contest entry as important in a world where many of us use camera and editing apps that apply beauty filters before we share photos with friends or on social media. That can distort our expectations of attractiveness.

Beauty filter apps are widespread. Facetune, one top app, promises to help you "stand out on social media." B612, another popular app, offers a "smart beauty" tool that recommends changes to your face shape and other aspects of your appearance. But after concluding that beautification filters can "negatively impact mental well-being," Google disabled automatic touch-ups by default in its Pixel camera app. It also stopped calling its adjustments "beauty" filters.