Machine Learning Programs are Becoming Biased

Caroline Petrow-Cohen

Did you ever think that it was possible for inanimate objects to be biased?

Recently, scientists have been training machine learning programs to predict where drug crimes will occur. So far, the programs’ predictions have been quite accurate; however, because they were trained on data from previous years of policing, they have become biased against non-white, low-income neighborhoods. The programs send police officers mainly to those neighborhoods, even though many white, high-income neighborhoods also experience significant drug crime.
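To see how this kind of feedback loop can arise, here is a minimal sketch in Python. The neighborhoods, crime rates, and patrol counts are invented purely for illustration and do not come from any real predictive-policing system: a naive model that allocates next year's patrols according to last year's recorded arrests keeps concentrating police in the neighborhoods that were already heavily policed, even when the true crime rate is assumed to be the same everywhere.

```python
import random

random.seed(0)

# Assume (hypothetically) the true drug-crime rate is the same in every neighborhood.
true_rate = {"A": 0.10, "B": 0.10, "C": 0.10, "D": 0.10}

# Historical policing focused on A and B, so recorded arrests start out skewed.
patrols = {"A": 60, "B": 60, "C": 10, "D": 10}

for year in range(5):
    # Recorded arrests depend on the true rate AND on how many officers are looking.
    arrests = {
        n: sum(random.random() < rate for _ in range(patrols[n] * 10))
        for n, rate in true_rate.items()
    }
    # A naive "predictive" model: send next year's patrols where arrests were highest.
    total = sum(arrests.values())
    patrols = {n: max(5, round(140 * arrests[n] / total)) for n in arrests}
    print(f"year {year}: arrests={arrests} -> next year's patrols={patrols}")

# Every neighborhood has the same true rate, yet patrols stay concentrated in
# A and B, because that is where arrests keep being recorded.
```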

Because of this, the machines have been producing “skewed results”. For example, an AI program fed 2010 drug crime data from Oakland, California in order to predict areas of future drug crime flagged mostly non-white, low-income neighborhoods, even though 2011 public health data suggests that drug crime was actually far more widespread across the city.

Figure: Estimated drug-crime rates in Oakland (left) versus actual recorded crime rates (right); actual crime was much more spread out across the city than predicted.
Figure: Factors used to predict crime rates, including gender, race, and drug history.

In addition, a program called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) made systematically different errors for white and African American defendants. Among defendants who did not go on to commit further crimes, the algorithm wrongly labeled black defendants as high-risk for future criminal activity far more often than white defendants. Among those who did commit future crimes, white defendants were much more often incorrectly predicted to be low-risk.
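As a rough, made-up illustration of the kind of error-rate comparison described above (this is not ProPublica's actual COMPAS analysis or data), the snippet below computes, for each group, the false positive rate among defendants who never reoffended and the false negative rate among those who did:

```python
# Invented records: (group, predicted_high_risk, actually_reoffended).
records = [
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("black", False, True),
    ("white", False, False), ("white", False, False), ("white", True,  False),
    ("white", False, True),  ("white", False, True),
]

def rates(group):
    rows = [r for r in records if r[0] == group]
    non_reoffenders = [r for r in rows if not r[2]]
    reoffenders = [r for r in rows if r[2]]
    # False positive rate: flagged high-risk but never reoffended.
    fpr = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    # False negative rate: marked low-risk but did reoffend.
    fnr = sum(not r[1] for r in reoffenders) / len(reoffenders)
    return fpr, fnr

for g in ("black", "white"):
    fpr, fnr = rates(g)
    print(f"{g}: false positive rate={fpr:.0%}, false negative rate={fnr:.0%}")
```

With these invented records, the black group shows a higher false positive rate and the white group a higher false negative rate, mirroring the qualitative pattern described for COMPAS.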

Scientists have suggested that changing the factors that AI programs use to predict crime rates may reduce bias. The effort to decrease machine bias is still in its early stages, but hopefully researchers will soon find ways to eliminate most of the biases that cause inaccuracies in crime-predicting and other programs. “Now, at least, people have started posing solutions, and weighing the various benefits of those ideas, so we’re not freaking out as much,” Dr. Venkatasubramanian says optimistically.
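One simple version of “changing the factors” is to drop sensitive attributes from the feature list before training. The sketch below is hypothetical (the feature names are loosely taken from the figure above), and removing protected attributes alone does not guarantee fairness, because other features can act as proxies for them:

```python
# Hypothetical feature list for a crime-prediction model.
all_features = ["gender", "race", "drug_history", "age", "neighborhood", "prior_arrests"]

# Attributes we choose to exclude from training.
sensitive = {"gender", "race"}

training_features = [f for f in all_features if f not in sensitive]
print(training_features)  # ['drug_history', 'age', 'neighborhood', 'prior_arrests']

# Caveat: features like "neighborhood" can still act as proxies for the
# attributes that were removed, so this alone may not eliminate bias.
```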


Jessica Yatvitskiy ’21

Sources

Maria Temming, “Fair-Minded Machines,” Science News, 16 Sept. 2017.
