
Google scientists reportedly told to make AI look more 'positive' in research papers

The news follows the departure of AI ethicist Timnit Gebru this month.

Corinne Reichert, Senior Editor
Google headquarters in Mountain View, California. Google is reportedly telling research scientists to spin AI in a more positive way. (Stephen Shankland/CNET)

Google parent Alphabet has been asking its scientists to make AI technology look more "positive" in their research papers, according to a Wednesday report from Reuters. A new review procedure is reportedly in place requiring researchers to consult with Google's legal, policy or PR teams for a "sensitive topics review" before exploring subjects such as face analysis and the categorization of race, gender or political affiliation.

"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," one of the internal webpages on the policy says, according to Reuters.

Read more: Google CEO apologizes for handling of AI researcher Timnit Gebru's departure

Other Google authors were told to "take great care to strike a positive tone," according to internal correspondence seen by Reuters.

The report follows Google CEO Sundar Pichai's apology earlier this month for the handling of artificial intelligence researcher Timnit Gebru's departure from the company, along with his pledge that the matter would be investigated. Gebru left Google on Dec. 4, saying she'd been forced out over an email she sent to co-workers.

The email criticized Google's Diversity, Equity and Inclusion operation, according to Platformer, which posted the full text of her missive. Gebru said in the posted email that she'd been asked to retract a research paper she'd been working on, after receiving feedback on it.

"You're not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything," she wrote in the email. "You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.

"Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of 'responsible AI' adds so much salt to the wounds," she added. NAUWU stands for "nothing about us without us," the idea that policies shouldn't be made without input from the people they affect. 

Google didn't immediately respond to a request for comment.