Have Algorithms Gotten A Bad Rap?

December 19, 2019

Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago, is the co-author of two studies that document the extent to which racial bias influences institutional decision-making. In the first study, pairs of resumes – identical in every respect except for the first names of the fictional applicants – were submitted to employers that had posted job openings. In one set, the applicant had a first name commonly perceived as "black," while in the other the name was commonly perceived as white. (The study was titled "Are Emily and Greg More Employable Than Lakisha and Jamal?" The answer was yes: the white-sounding names drew roughly 50 percent more callbacks.) The second study examined an algorithm actually used by healthcare systems to identify patients who need additional medical services. It found that the algorithm consistently underestimated the healthcare needs of black patients. The problem was that the algorithm used the amount of money a person had spent on healthcare as a proxy for medical need, and black patients statistically spend less on healthcare for the same level of need. Once the problem was explained, the organization using the algorithm was willing to fix it. The author concludes that bias in algorithms, while it can be harmful and pernicious, can in principle – assuming there is the will to do it and the logic of the algorithm is not hidden – be identified and corrected. Human bias is a far more intractable problem.
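
The healthcare finding turns on a proxy problem: a model trained to predict spending will understate need for any group that spends less when equally sick. The short Python sketch below illustrates the mechanism with entirely hypothetical numbers (it is not the actual healthcare algorithm): two groups have identical distributions of true need, but one spends less per unit of need, so ranking patients by past spending selects far fewer of them than ranking by need itself.

```python
# Minimal sketch of proxy bias, using made-up synthetic data.
# Group B has the same distribution of true need as Group A but spends
# less per unit of need, so a spending-based ranking under-selects it.
import random

random.seed(0)

patients = []
for group in ("A", "B"):
    for _ in range(1000):
        need = random.uniform(0, 10)                 # true health need (what we want to measure)
        spend_rate = 1.0 if group == "A" else 0.7    # hypothetical: group B spends 30% less per unit of need
        spending = need * spend_rate + random.gauss(0, 0.5)
        patients.append({"group": group, "need": need, "spending": spending})

def group_b_share_of_top(key, k=200):
    """Share of group B among the k patients ranked highest by `key`."""
    top = sorted(patients, key=lambda p: p[key], reverse=True)[:k]
    return sum(p["group"] == "B" for p in top) / k

print("Group B share when ranking by true need:     ", group_b_share_of_top("need"))
print("Group B share when ranking by spending proxy:", group_b_share_of_top("spending"))
# The spending proxy picks far fewer group-B patients than a ranking by
# true need would, even though need is identically distributed in both groups.
```

The fix described in the article amounts to changing the target the model predicts (a more direct measure of health need rather than spending), which is exactly the kind of correction that is possible once the algorithm's logic is visible.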

Read full article at:
