There’s a lot of unfiltered hype about AI/algorithms/Big Data/(fill in the blank). But along with the positive potential, there are problems that need to be addressed quickly, before the cement starts drying.
One of these is the embedding of bias in algorithms that make important real-world decisions, such as who gets hired, who gets pulled aside for extra security screening, and so forth.
At a conference held last week in New York City, Fairness, Accountability and Transparency (FAT*), speakers shared research and action programs aimed at identifying and eliminating bias in algorithms.
One of the speakers was Joy Buolamwini of MIT, who is part of the AJL – the Algorithmic Justice League. She and a colleague, Timnit Gebru, examined the facial recognition systems of IBM, Microsoft and Face++ and found that these programs classified lighter faces more accurately than darker ones, and male faces more accurately than female ones. IBM’s software got light male faces wrong just 0.3% of the time, compared with a 34.7% error rate for dark female faces.