Facebook touts efforts to keep bias out of its AI tools
Tech Industry
Now we're working to scale the Fairness Flow to evaluate the personal and societal implications of every product that we build.
As a step in that direction, we've integrated the Fairness Flow into our internal machine learning platform, FBLearner Flow.
This is exciting because it means that any engineer at the company can plug into this technology and then evaluate their algorithms for bias.
Most importantly, it means they don't need to reinvent the wheel.
They can directly draw on best practices from the external community as well as our internal work.
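To make this concrete, here is a minimal sketch of the kind of check such a tool automates: comparing an error metric (here, false positive rate) across demographic groups and flagging large gaps. Fairness Flow's actual interface is internal and not public, so every name and field below is a hypothetical illustration, not Facebook's API.

```python
# Hypothetical sketch of a group-fairness check; not Fairness Flow's real API.
from collections import defaultdict

def false_positive_rate_by_group(examples):
    """Compute a model's false positive rate per group.

    `examples` is a list of (group, label, prediction) tuples, where
    label and prediction are 0/1. Large gaps between groups are a
    signal of potential bias worth investigating.
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # negative-label examples per group
    for group, label, pred in examples:
        if label == 0:
            neg[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Toy predictions for two hypothetical demographic groups.
data = [
    ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(false_positive_rate_by_group(data))
# {'group_a': 0.333..., 'group_b': 0.666...} -- a gap worth investigating
```

False positive rate is only one of several group metrics such an evaluation might compare; which metric is appropriate depends on the product and the harm being guarded against.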
This is still an active area of research.
These methods will continue to improve and adapt as the applications of AI, and the types of protections we need for AI, evolve.
This conversation necessarily involves a diverse set of perspectives.
Even a first step, like building the Fairness Flow, requires collaborating with external experts.
Technologists can't provide all the answers here, because many of our most important questions sit at the intersection of multiple disciplines and communities.
Beyond mathematics and computer science, these are social science, ethics, law, and policy questions.
So we can't and we won't work on this in a vacuum.
Not at Facebook and not anywhere.