
AI's social justice problem: It's amplifying human bias

Coded Bias director Shalini Kantayya talked to CNET about the algorithms that are making society's most important decisions for us.

Jason Hiner VP of Content, CNET Labs; Editor in Chief, ZDNET

Artificial intelligence now plays a key role in deciding who gets jobs, who gets into colleges, who gets loans, who gets accused of crimes and so much more. But recent research has shown that the algorithms driving AI are inheriting -- and in some cases even amplifying -- the biases behind the inequalities and injustices in our society, especially for women and people of color.

A documentary called Coded Bias, which landed on Netflix this month, tackles this issue head-on. Director Shalini Kantayya joined CNET's Now What series to talk about the film.

Read more: Coded Bias review: Eye-opening Netflix doc faces racist technology

Kantayya talked about how the documentary opened her eyes to one of the biggest challenges society is facing as we move into the AI age.

"I hadn't realized the extent to which algorithms, machine learning and AI are increasing becoming the gatekeeper of opportunity -- deciding such important things as who gets hired, who gets what quality of healthcare, even who gets the [COVID-19] vaccine, or how long a prison sentence someone might serve," Kantayya said. 


"So as I started to understand the extent to which we are outsourcing our decision-making to machines … I began to realize that these same systems that we're trusting so implicitly with decisions that are essentially changing human destiny have not been vetted for racial bias or gender bias or -- or more broadly that they won't hurt people or have unintended consequences and cause harm."

We also talked to Kantayya about the conflict between the Silicon Valley narrative of AI and the Hollywood narrative of AI, which Princeton professor Ruha Benjamin brought up in her appearance on Now What.

Watch the whole interview with Kantayya and then catch the documentary on Netflix.


Now What is a video interview series with industry leaders, celebrities and influencers that covers trends impacting businesses and consumers amid the "new normal." There will always be change in our world, and we'll be here to discuss how to navigate it all. 
