This is how biased AI could quickly become a big problem
A lot of people have trouble understanding how a technology could make biased decisions, and one of the examples I like to give is a simple pair of scissors.
If you have ever reached for a pair of scissors and not really thought about it, you're probably right-handed, because the majority of the population is right-handed.
But if you're left-handed, you may be hesitant to use a pair of scissors, because they're probably not going to work for you.
There are a lot of different ways you can pick this bias up from training data.
You can have a beauty contest classifier, like: is this woman beautiful or not, based on some past examples of training data.
Now, if no Black woman has historically won this beauty contest, the classifier will never say a Black woman could be beautiful, which is obviously not true.
So it's a very tricky and subtle issue that you need to be aware of.
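As a rough illustration, here is a minimal sketch of that failure mode, using made-up data and assuming scikit-learn and NumPy are available; it is not taken from the original example, just one way the pattern can arise:

```python
# Minimal sketch (hypothetical data): a model trained on history where
# one group never wins learns that gap as a hard rule.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)   # 0 or 1: demographic group
score = rng.normal(size=n)           # some arbitrary merit score

# Historical labels: group 1 never won, regardless of score.
won = ((score > 0) & (group == 0)).astype(int)

model = DecisionTreeClassifier().fit(np.column_stack([group, score]), won)

# Same high score, different group: the model gives group 1 no chance,
# because the historical gap has become a rule.
print(model.predict_proba([[0, 2.0], [1, 2.0]])[:, 1])  # -> [1.0, 0.0]
```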
The bias tends to come in through the training data.
So if, for example, you are using things like age, race, and gender in your model, there could be historical factors that impact how your model is going to be trained.
That could be, for example, loans at a bank.
Historically, people of color have been approved at a lower rate than Caucasians.
And so if you were using race as one of the factors in your model, you might be introducing that historical bias into your future decisions.
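One simple way to check for that is to audit the model's decisions for a gap in approval rates between groups. The sketch below is illustrative only; the decisions, group labels, and the `approval_rate_gap` helper are hypothetical:

```python
# Minimal sketch: measure the gap in approval rates between two groups
# in a loan model's decisions (hypothetical data).
import numpy as np

def approval_rate_gap(approved, group):
    """Difference in approval rate between group 0 and group 1."""
    approved, group = np.asarray(approved), np.asarray(group)
    return approved[group == 0].mean() - approved[group == 1].mean()

# Hypothetical model outputs (1 = approved) and group membership.
approved = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(approval_rate_gap(approved, group))  # 0.75 - 0.25 = 0.5
```

A large gap doesn't prove the model is unfair on its own, but it's the kind of signal that should prompt a closer look at the training data.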
There's unfortunately no silver bullet to fix all of these problems.
You need to be very careful about the training data that you have.
And even in medical applications, where there could be a very positive impact, you need to be careful about having the same kinds of distributions for different ethnicities, different genders, different age groups, and so on, if you want to apply AI to medicine.
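A simple first check along those lines is to look at how evenly the training set covers each demographic group. This sketch is a hypothetical illustration; the record format and the `representation` helper are made up:

```python
# Minimal sketch: fraction of training records per demographic group.
from collections import Counter

def representation(records, key):
    """Fraction of records for each value of a demographic attribute."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

records = [
    {"age_group": "18-40", "sex": "F"},
    {"age_group": "18-40", "sex": "M"},
    {"age_group": "65+",   "sex": "F"},
]
print(representation(records, "age_group"))
# -> roughly {'18-40': 0.67, '65+': 0.33}: older patients underrepresented
```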
We talk about creating trusted AI that is responsible, that is mindful, and that safeguards human rights, making sure that what we are doing does not infringe on those rights.
It also needs to be transparent.
It has to be able to explain to the end user what it is doing, and give them the opportunity to make informed choices with it.
So one example of that, in our case, is that we will show the factors that are used in a model, like age, race, and gender, and we're gonna raise a flag if you're using one of those protected data categories.
Say you're using race: that could be adding bias to your model.
Or, because of the US history of redlining, we know that zip code can be correlated with race.
And so we'll raise a flag and say, hey, you're using zip code.
That can be a proxy for race.
Again, you may be introducing bias into your model.
Then you can uncheck that, rerun the analysis, and see how it changes.
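In code, that flag-and-rerun flow might look like the sketch below; the feature names, the PROTECTED set, and the PROXIES mapping are hypothetical, not the actual product logic:

```python
# Minimal sketch: flag protected categories and known proxies, then
# drop them and rerun (hypothetical names throughout).
PROTECTED = {"age", "race", "gender"}
PROXIES = {"zip_code": "race"}  # e.g. due to the US history of redlining

def flag_features(features):
    flags = []
    for f in features:
        if f in PROTECTED:
            flags.append(f"'{f}' is a protected category; it may add bias.")
        if f in PROXIES:
            flags.append(f"'{f}' can be a proxy for {PROXIES[f]}.")
    return flags

features = ["income", "zip_code", "race"]
for warning in flag_features(features):
    print("FLAG:", warning)

# "Uncheck" the flagged features and rerun the analysis without them:
cleaned = [f for f in features if f not in PROTECTED and f not in PROXIES]
print(cleaned)  # ['income']
```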
So we want to be able to communicate to our users how our AI is working, so that they can use it responsibly.