Just because artificial intelligence training data might show that doctors are often male doesn't mean machine learning should assume that's the case.
Stephen Shankland, Principal Writer
At the company's Google I/O conference Tuesday, Chief Executive Sundar Pichai described research to gain insight into how Google's artificial intelligence algorithms work and make sure they don't "reinforce bias that exists in the world."
Specifically, he described a technology called TCAV (testing with concept activation vectors) that's designed, for example, to keep a model from assuming a doctor is male even if AI training data indicates that's more likely.
"It's not enough to know that an AI model works. We have to know how it works," Pichai said. "Bias has been a concern in science long before machine learning came long. The stakes are clearly higher in AI."
Concept activation vectors make it easier to see the choices an AI algorithm is making, expressing them in higher-level, human-friendly terms rather than low-level characteristics like pixel-level structures in photos.
In one research paper about concept activation vectors, Google researchers showed the technology could identify medical concepts that were relevant to predicting an eye problem called diabetic retinopathy. And it could reveal what's going on inside the mind of the AI, so to speak, so humans could oversee it better. "TCAV may be useful for helping experts interpret and fix model errors when they disagree with model predictions," the researchers concluded.
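The mechanism the researchers describe can be sketched in a toy example: learn a linear separator between a concept's activations and random activations (its normal is the concept activation vector), then measure what fraction of a class's examples have prediction gradients pointing along that vector. This is a minimal NumPy illustration, not Google's implementation; the network, layer sizes, and data below are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations at a hypothetical 4-unit layer of some model:
# examples that show the concept (e.g. "stethoscope") vs. random
# counterexamples. Real TCAV would collect these from a trained network.
concept_acts = rng.normal(loc=1.0, scale=0.5, size=(50, 4))
random_acts = rng.normal(loc=-1.0, scale=0.5, size=(50, 4))

# Step 1: learn the concept activation vector (CAV).
# Fit a linear classifier separating the two sets; its weight vector,
# normalized, is the CAV. A few logistic-regression gradient steps
# suffice for this well-separated toy data.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), np.zeros(50)])
w = np.zeros(4)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)        # gradient descent step
cav = w / np.linalg.norm(w)

# Step 2: TCAV score for one class of a toy two-layer network,
# f(a) = relu(a @ W1) @ w2. The gradient of f with respect to the
# activations differs per example because of the ReLU mask.
W1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=8)
inputs = rng.normal(size=(100, 4))           # activations of class examples
hidden = inputs @ W1
grads = ((hidden > 0) * w2) @ W1.T           # (100, 4): one gradient each

# TCAV score: fraction of examples whose directional derivative along
# the CAV is positive, i.e. how often the concept pushes the prediction up.
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1.0 would suggest the concept strongly influences the class prediction; near 0.5, that the model is largely insensitive to it, which is how a check like "does 'male' drive the 'doctor' prediction?" could be framed.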