Facebook touts efforts to keep bias out of its AI tools
Now we're working to scale the Fairness Flow to evaluate the personal and societal implications of every product that we build.
As a step in that direction, we've integrated the Fairness Flow into our internal machine learning platform, FBLearner Flow.
This is exciting because it means that any engineer at the company can plug into this technology and then evaluate their algorithms for bias.
Most importantly, it means they don't need to reinvent the wheel.
They can directly draw on best practices from the external community as well as our internal work.
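To make the idea of "evaluating an algorithm for bias" concrete, here is a minimal, hypothetical sketch of the kind of check such a tool can run: comparing a classifier's false-positive rate across demographic groups. The function names and data are illustrative assumptions, not Facebook's actual API.

```python
# Hypothetical sketch of a group-fairness check: compare a model's
# false-positive rate (FPR) across groups. Illustrative only.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    negatives = defaultdict(int)   # count of actual negatives per group
    false_pos = defaultdict(int)   # predicted-positive among those negatives
    for group, y_true, y_pred in records:
        if y_true == 0:
            negatives[group] += 1
            if y_pred == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

def fpr_gap(records):
    """Largest pairwise FPR difference; a large gap flags potential bias."""
    rates = false_positive_rates(records)
    return max(rates.values()) - min(rates.values())

data = [
    ("A", 0, 0), ("A", 0, 1), ("A", 0, 0), ("A", 0, 0),  # group A: FPR 1/4
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),  # group B: FPR 2/4
]
print(false_positive_rates(data))  # {'A': 0.25, 'B': 0.5}
print(fpr_gap(data))               # 0.25
```

A production tool would of course cover more metrics (demographic parity, calibration, false-negative rates) and handle statistical uncertainty, but the core pattern of slicing one metric by group is the same.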
This is still an active area for research.
These methods will continue to improve and adapt as the applications of AI, and the types of protections AI needs, evolve.
This conversation necessarily involves a diverse set of perspectives.
Even a first step, like building the Fairness Flow, requires collaborating with external experts.
Technologists can't provide all the answers here, because many of our most important questions sit at the intersection of many disciplines and communities.
Beyond mathematics and computer science, these are social science, ethics, law, and policy questions.
So we can't and we won't work on this in a vacuum.
Not at Facebook and not anywhere.