Fairness can perpetuate discrimination

For most of the last century, insurers did not have the sophisticated algorithms they use today. The original idea of risk spreading and the principle of solidarity were based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence.

This began to change around the 1970s, when that vision gave way to so-called actuarial fairness: the idea that what each person pays should reflect their individual statistical risk. Living in a poor or minority-dense neighbourhood came to mean higher insurance costs, denied loans and so on. When insurers were accused of justifying discrimination, they replied that they were simply doing their job, a purely technical one that involved no moral judgments. The effects on society were not their problem or their business.

Sound familiar? It is the same argument social media platforms make today: they are merely technical platforms running algorithms, not arbiters of the content that flows through them.

Civil rights activists lost their battles with the insurance industry because they insisted on arguing about the accuracy of particular statistics or the validity of particular classifications, rather than questioning whether actuarial fairness was a valid framework in the first place.

There are several obvious problems with that framework. If you believe that risk scores accurately predict the future outcomes of a certain group of people, then you are accepting that it is “fair” for a person to be more likely to spend time in jail simply because they are black.

The other problem is that the underlying data are skewed: there are fewer arrests in rich neighbourhoods not because residents commit fewer crimes but because there is less policing. A person is more likely to be rearrested if they live in an over-policed neighbourhood, and that creates a feedback loop: more arrests mean higher measured recidivism rates, which in turn justify more policing.

Over-policing and predictive policing may be “accurate” in the short term, but the long-term effects on communities have been shown to be negative, creating self-fulfilling prophecies. A minimal simulation of this loop follows.
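The sketch below is a toy model of that feedback loop, not a model of any real police department: every number in it is invented for illustration. It assumes two neighbourhoods with identical true crime rates, a recording probability proportional to local patrol presence, and a “predictive” rule that reallocates patrols each day in proportion to cumulative recorded crime.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two neighbourhoods with IDENTICAL true daily crime counts.
# All parameters below are illustrative assumptions.
true_daily_crimes = np.array([20, 20])
patrols = np.array([55.0, 45.0])   # slight initial skew in patrol allocation
observed = np.array([1.0, 1.0])    # cumulative recorded crimes

for day in range(365):
    # A crime is only recorded if a patrol is present to observe it:
    # the recording probability grows with local patrol presence.
    p_record = patrols / 100.0
    recorded = rng.binomial(true_daily_crimes, p_record)
    observed += recorded
    # "Predictive" reallocation: send patrols where crime was recorded.
    patrols = 100.0 * observed / observed.sum()

print("true crime ratio:      1.00 : 1.00")
print(f"final patrol split:    {patrols[0]:.0f} : {patrols[1]:.0f}")
print(f"recorded crime share:  {observed[0] / observed.sum():.2%} vs "
      f"{observed[1] / observed.sum():.2%}")
```

Because crimes are only recorded where patrols are present, the recorded statistics end up mirroring the patrol allocation rather than the identical underlying crime rates: the initial skew is preserved and looks vindicated by its own output.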

Like the insurers, large tech firms and the computer science community also tend to frame “fairness” in a depoliticised way, as a matter of mathematics and code alone.

The problem is that fairness cannot be reduced to a simple, self-contained mathematical definition. Fairness is dynamic and social, not a statistical property: it can never be fully achieved, and it must be constantly audited, adapted and debated in a democracy. By relying merely on historical data and current definitions of fairness, we lock in the accumulated unfairness of the past; our algorithms and the products they support will always trail social norms, reflecting past norms rather than future ideals and slowing social progress rather than supporting it.
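The claim that no single formula captures fairness can be made concrete. The algorithmic-fairness literature has shown that common statistical definitions (calibration, demographic parity, equal error rates) cannot all hold at once when groups differ in their base rates. The sketch below uses made-up Beta-distributed risks (all parameters are illustrative assumptions, not real data) to show that even a perfectly calibrated score, applied with one shared threshold, yields different selection and error rates across groups:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Two hypothetical groups whose underlying risk distributions differ.
# Beta(a, b) parameters are illustrative assumptions only.
groups = {"A": (2, 5), "B": (5, 2)}

threshold = 0.5  # one shared decision threshold for both groups

for name, (a, b) in groups.items():
    risk = rng.beta(a, b, n)       # each individual's true risk
    y = rng.random(n) < risk       # outcome drawn from that risk
    score = risk                   # a *perfectly calibrated* predictor
    flagged = score >= threshold   # decision rule

    selection_rate = flagged.mean()   # demographic parity metric
    fpr = flagged[~y].mean()          # false positive rate
    tpr = flagged[y].mean()           # true positive rate
    print(f"group {name}: selected {selection_rate:.2%}, "
          f"FPR {fpr:.2%}, TPR {tpr:.2%}")
```

Equalising any one of these metrics across the groups (say, by moving to per-group thresholds) necessarily breaks another, so deciding which definition of fairness to enforce is a value judgment, not a calculation.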

This is a summary of an inspiring idea from Joi Ito, which you can read in full in his article in Wired.
