
There’s still time to prevent biased AI from taking over the world

Autonomous systems might eventually require regulation, but experts say no single policy can govern all artificial intelligence.

Dan Patterson
Dan is a writer, reporter, and producer. He is currently a reporter at CBS News and was previously a Senior Writer for TechRepublic.

Artificial intelligence is ubiquitous. Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next favorite song.

But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." More immediately, AI experts are concerned that automated systems are likely to absorb bias from human programmers. And once bias is coded into the algorithms that power AI, it may be nearly impossible to remove.

Watch this: This is how biased AI could quickly become a big problem

Examples of AI bias are numerous: In July 2015, the Google Photos AI system labeled black people as gorillas; in March 2018, CNET reported that Amazon and Google's smart devices appear to have trouble understanding accents; in October 2018, Amazon scuttled an AI-powered job recruitment tool that appeared to discriminate against women; and in May 2019, a report from the United Nations Educational, Scientific and Cultural Organization (UNESCO) found that AI personal assistants can reinforce harmful gender stereotypes.

To better understand how AI might be governed, and how to prevent human bias from altering the automated systems we rely on every day, CNET spoke with Salesforce AI experts Kathy Baxter and Richard Socher in San Francisco. Regulating the technology might be challenging, and the process will require nuance, said Baxter.

Watch this: Why regulating AI is essential but might be impossible

The industry is working to develop "trusted AI that is responsible, that is mindful, and safeguards human rights," she said. "We make sure [the process] does not infringe on those human rights. It also needs to be transparent. It has to be able to explain to the end user what it is doing, and give them the opportunity to make informed choices with it."

Salesforce and other tech firms, Baxter said, are developing cross-industry guidance on the criteria for data used in AI models. "We will show the factors that are used in a model, like age, race and gender. And we're going to raise a flag if you're using one of those protected data categories."
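
Baxter didn't describe how such a flag would be implemented, but the underlying check is simple to picture. Here is a minimal, hypothetical sketch in Python of scanning a training dataset's columns for protected categories before a model is built; the column names, the category list and the pandas-based setup are illustrative assumptions, not Salesforce's actual approach.

```python
# Hypothetical sketch: flag protected data categories in a model's
# training data. The category list and column names are assumptions
# for illustration, not any vendor's real implementation.
import pandas as pd

# Attributes commonly treated as protected; an illustrative list only.
PROTECTED_CATEGORIES = {"age", "race", "gender", "religion", "disability"}

def flag_protected_columns(df: pd.DataFrame) -> list[str]:
    """Return the columns in a training set that match a protected category."""
    return [col for col in df.columns if col.lower() in PROTECTED_CATEGORIES]

# Toy applicant dataset with one protected column.
applicants = pd.DataFrame({
    "years_experience": [3, 7, 1],
    "gender": ["F", "M", "F"],
    "score": [0.82, 0.64, 0.91],
})

flags = flag_protected_columns(applicants)
if flags:
    print(f"Warning: model input uses protected data categories: {flags}")
```

A check like this only surfaces the factors for review; deciding whether a flagged attribute should be excluded, or whether a proxy for it lurks in other columns, still requires human judgment.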