Review: Delayed impact of fair machine learning

In this post I will discuss one of the two best papers at ICML 2018 – Delayed Impact of Fair Machine Learning. Unlike other papers that construct ever more innovative definitions of fairness, this paper analyzes the delayed impact of fairness policies. It shows that such policies do not necessarily improve the situation of the disadvantaged population: in some cases, they may hurt it in the long run.

In this review, I will help readers understand the main idea of the paper. In particular, I will analyze the soundness of its arguments as well as its applicability to real life. My analysis shows that two critical assumptions in the paper are, respectively, incoherent and invalid, which makes the paper irrelevant.

Contents

  • Paper idea: fairness policies can ruin the reputation
  • The 1st assumption is incoherent
  • The 2nd assumption is invalid
  • What is the point of fairness?
  • What will be the future of fair machine learning?

Paper idea: fairness policies can ruin the reputation

Let us consider a simple example: a bank gives loans to two populations A and B, where A has worse scores than B. Since the bank wants to maximize its utility, it is more likely to give loans to B than to A, which raises discrimination concerns. Therefore, the government wants to impose fairness policies to help the disadvantaged population A.

Besides the previous two concepts – score and utility – the paper introduces a third concept: (population) reputation. The paper calls it well-being and typically represents it as the average score of the population. It shows that, while fairness policies help the disadvantaged population A get more loans, they can also ruin A's reputation, which hurts A in the long run.

A picture is worth a thousand words; the following outcome curve is provided by the authors and slightly altered by me (e.g., average score -> reputation). It shows that, as the selection rate increases, the reputation change first rises (more repayments than defaults) and then falls (more defaults than repayments).

[Figure: outcome curve]
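
To make the shape of the curve concrete, here is a minimal Python sketch, not taken from the paper: the score grid, the score distribution of population A, the repayment probability rho(x), and the gain/loss constants are all hypothetical choices of mine.

```python
import numpy as np

# Hypothetical score grid, score distribution of population A, and repayment
# probability rho(x); none of these numbers come from the paper.
scores = np.linspace(300, 850, 5501)
pi = np.exp(-0.5 * ((scores - 550) / 80) ** 2)
pi /= pi.sum()
rho = 1 / (1 + np.exp(-(scores - 600) / 40))

C_PLUS, C_MINUS = 15.0, 30.0                 # score gain on repayment, loss on default
delta = C_PLUS * rho - C_MINUS * (1 - rho)   # expected score change per loan

order = np.argsort(scores)[::-1]             # lend to the highest scores first
cum = np.cumsum(pi[order])

def reputation_change(beta):
    """Expected change of A's average score when the top-`beta` fraction
    (by score) receives a loan under a threshold policy."""
    selected = order[cum <= beta]
    return float(np.sum((pi * delta)[selected]))

for beta in np.linspace(0.1, 1.0, 10):
    print(f"selection rate {beta:.1f}: reputation change {reputation_change(beta):+.2f}")
# The printed values first increase (the extra loans go mostly to likely
# repayers) and then decrease (the extra loans go mostly to likely defaulters).
```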

Now let us imagine that this curve represents the disadvantaged population A. To maximize its utility, the bank will choose point a, which also increases A's reputation: a win-win. However, in order to comply with the government's fairness policy, the bank has to increase A's selection rate.

If the selection rate reaches point b, A's reputation gain is maximized; the bank trades its utility for A's reputation: the bank wins less, but A wins more. If the selection rate increases further, A's reputation change declines: the bank trades its utility for nothing; the bank wins less, and A gains less than at b. If the selection rate increases beyond point d, A's reputation is actually hurt: a lose-lose.
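
These three points can be located numerically once the bank's per-loan utility u(x) and the expected score change Δ(x) are written as functions of ρ(x), roughly in the spirit of the paper's setup. The sketch below is my own construction with hypothetical constants, chosen so that the paper's 2nd assumption (positive utility implies positive score change) holds; only the ordering a < b < d matters, not the numbers.

```python
import numpy as np

# Same hypothetical population as in the previous sketch; u(x) and Delta(x)
# are modelled as affine functions of rho(x) with made-up gain/loss constants.
scores = np.linspace(300, 850, 5501)
pi = np.exp(-0.5 * ((scores - 550) / 80) ** 2)
pi /= pi.sum()
rho = 1 / (1 + np.exp(-(scores - 600) / 40))

U_PLUS, U_MINUS = 1.0, 4.0                   # bank's profit on repayment, loss on default
C_PLUS, C_MINUS = 15.0, 30.0                 # borrower's score gain on repayment, loss on default
utility = U_PLUS * rho - U_MINUS * (1 - rho)
delta = C_PLUS * rho - C_MINUS * (1 - rho)

order = np.argsort(scores)[::-1]             # threshold policies: lend to higher scores first
betas = np.cumsum(pi[order])                 # selection rate after each score bucket
bank_profit = np.cumsum((pi * utility)[order])
rep_change = np.cumsum((pi * delta)[order])

a = betas[np.argmax(bank_profit)]            # point a: maximizes the bank's utility
b = betas[np.argmax(rep_change)]             # point b: maximizes A's reputation change
harmed = np.where((betas > b) & (rep_change < 0))[0]
d = betas[harmed[0]] if harmed.size else 1.0 # point d: reputation change turns negative

print(f"a = {a:.2f} (utility max), b = {b:.2f} (reputation max), d = {d:.2f} (harm threshold)")
# With these numbers a < b < d: raising the selection rate past a first helps A
# (up to b), then merely wastes the bank's utility, and past d it lowers A's
# average score outright.
```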

Therefore, aggressive fairness policies can do actual harm to the disadvantaged population. Without fairness policies, A's reputation grows naturally; with them, it can grow more slowly or even decline. The resulting inefficiency in reputation building can make it harder for A to obtain loans in the future.

Although this idea is intuitive and self-explanatory, the paper attempts but fails to explain how the reputation change actually benefits or harms the population. That is, the authors adopted a model whose delayed “impact” has no impact at all. This failure stems from two inappropriate assumptions in its approach.

The 1st assumption is incoherent

The 1st assumption, which appears in the upper-right corner of page 3 and is quite easy to miss, claims:

… the success of an individual is independent of their group given the score; that is, the score summarizes all relevant information about the success event, so there exists a function $\rho: \mathcal{X} \rightarrow [0, 1]$ such that individuals of score $x$ succeed with probability $\rho(x)$.

If you find it hard to understand, let me illustrate it in the language of graphical models, as shown in the figure below. Here, “group” is a synonym of “population”, and “repay” is a synonym of “success”. Once we observe the score, we can ignore the variable “group”, since it carries no relevant information beyond what is already contained in the variable “score”.

[Figure: graphical model of Assumption 1]
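
A tiny simulation makes the conditional independence explicit; the repayment probability rho below is a hypothetical placeholder, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(x):
    """Hypothetical repayment probability as a function of the score."""
    return 1 / (1 + np.exp(-(x - 600) / 40))

def simulate_repayment(score, group):
    # `group` is accepted as an argument but, by Assumption 1, never used:
    # P(success | score, group) = P(success | score) = rho(score).
    return rng.random() < rho(score)

# Two applicants from different groups with the same score behave identically
# in distribution; this is exactly what lets the paper work with rho alone.
print(np.mean([simulate_repayment(640, "A") for _ in range(10_000)]))
print(np.mean([simulate_repayment(640, "B") for _ in range(10_000)]))
```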

The existence of $\rho(\cdot)$ allows the authors to quantify the utility and the reputation change, which in turn lets them compute the outcome curve above, from which all the useful results follow. This assumption is thus the fundamental one, without which the paper collapses. Nevertheless, the authors may not have noticed that this very assumption is what makes their results irrelevant.

In fact, the reputation, being an aggregate statistic of the group, is part of the group variable and is therefore screened out by the score variable. Under the assumption, the reputation can never play a role in the bank's decision making. Without an effective way to translate the (population) reputation change into a population score change, the population can never benefit from an increased reputation.

But if we do allow the reputation to influence the score (say, by defining the reputation as the average score and shifting every individual score by the average score change), we violate the assumption that the score already contains all relevant information about the group; this is why I call the assumption incoherent. Indeed, the model is static, with no time scale, which is why the paper cannot go beyond a single round.
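
For concreteness, here is a sketch of the feedback rule I just described. The score distributions, thresholds, and update constants are all hypothetical; this is my own construction, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
C_PLUS, C_MINUS = 15.0, 30.0   # hypothetical score gain on repayment, loss on default

def rho(x):
    """Hypothetical repayment probability as a function of the score."""
    return 1 / (1 + np.exp(-(x - 600) / 40))

def lending_round(scores, threshold):
    """One round: lend above `threshold`, update each borrower's score, then
    shift the whole group by its average score change (the reputation feedback)."""
    selected = scores > threshold
    repaid = rng.random(scores.size) < rho(scores)
    individual_change = np.where(repaid, C_PLUS, -C_MINUS) * selected
    reputation_change = individual_change.mean()          # group-level average change
    return scores + individual_change + reputation_change

scores_A = rng.normal(550, 80, size=5_000)   # hypothetical group-A scores
scores_B = rng.normal(650, 80, size=5_000)   # hypothetical group-B scores
scores_A = lending_round(scores_A, threshold=620)
scores_B = lending_round(scores_B, threshold=620)
# After the shift, part of every individual's score comes from a group-level
# quantity rather than from that individual's own behaviour, so the score no
# longer screens out the group: the very next round contradicts Assumption 1.
```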

That said, if we do introduce a time scale and manage to design some convoluted dynamics that allow the reputation to influence the score, can we reproduce the paper's conclusion? The answer is no: the invalidity of the 2nd assumption, analyzed in the next section, makes the paper unsalvageable.

The 2nd assumption is invalid

The 2nd assumption, at the upper-left corner of page 5, states that the institution's individual utility function is more stringent than the individual reputation change, i.e., positive utility implies positive reputation change.

The authors made this assumption on the grounds that a self-interested bank cares more about its own utility than about the individuals' reputation. With it, they managed to prove that the utility-maximizing policy is suboptimal for the population: the bank can further improve the population's reputation by slightly increasing the selection rate.

Again, the authors avoid discussing how this reputation increase will benefit the population in question. In fact, they cannot answer this question. Once the reputation increase is translated into a score increase, the utility-maximizing selection rate increases as well, which again yields a positive reputation change. In other words, the utility-maximizing selection rate (as well as any other reasonable selection rate) is monotonically increasing in time, which is ridiculous.
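
A short simulation of this feedback loop (my own construction, with hypothetical numbers) under the paper's 2nd assumption and a naive reputation-to-score translation shows the runaway behaviour.

```python
import numpy as np

# Hypothetical score distribution and constants; chosen so that positive
# utility implies positive score change, as the paper's 2nd assumption requires.
scores = np.linspace(300, 850, 5501)
pi = np.exp(-0.5 * ((scores - 550) / 80) ** 2)
pi /= pi.sum()

U_PLUS, U_MINUS = 1.0, 4.0     # bank's gain on repayment, loss on default
C_PLUS, C_MINUS = 15.0, 30.0   # borrower's score gain on repayment, loss on default
shift = 0.0                    # cumulative reputation-driven score shift

for round_ in range(10):
    rho = 1 / (1 + np.exp(-((scores + shift) - 600) / 40))  # shifted scores repay better
    utility = U_PLUS * rho - U_MINUS * (1 - rho)
    delta = C_PLUS * rho - C_MINUS * (1 - rho)
    lend = utility > 0                                      # utility-maximizing threshold policy
    selection_rate = pi[lend].sum()
    reputation_change = (pi * delta * lend).sum()           # positive, by the 2nd assumption
    shift += reputation_change                              # translate reputation gain into scores
    print(f"round {round_}: selection rate {selection_rate:.4f}, "
          f"reputation change {reputation_change:+.3f}")
# The selection rate only grows: each round's positive reputation change raises
# every score, the bank then clears more applicants above its utility threshold,
# and the assumption guarantees the next reputation change is again positive.
```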

The bug here is the invalid assumption, which should instead be reversed: for any reasonable definition of reputation, the individual reputation change is more stringent than the institution's individual utility function, i.e., positive reputation change implies positive utility. Under the reversed assumption, even when the utility is positive (but modest), the reputation change can still be negative.

To understand the reworded version, you do not need a Ph.D.; plain life experience suffices. You need to collaborate with the same person over and over to build trust gradually, even when every collaboration yields a positive result, yet one betrayal can quickly destroy that trust. Conversely, the surest way to avoid betrayal is to avoid any social relation: since there is no interpersonal interaction (zero utility), there can be no disappointment (zero reputation change).

Under the reworded assumption, the utility-maximizing selection rate is always higher than the altruistic one. In other words, if the bank wants to maximize the population's reputation change, it has to reduce the selection rate. We arrive at exactly the opposite conclusion from the paper. This may seem counter-intuitive at first glance but is quite natural on reflection: you need to be more cautious to avoid disappointment, even though a slightly more aggressive collaboration may yield higher utility.
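
The following sketch illustrates the reworded assumption with hypothetical constants of my own, chosen so that a positive reputation change requires a stricter (higher) repayment probability than a positive utility does.

```python
import numpy as np

# Hypothetical population, as in the earlier sketches.
scores = np.linspace(300, 850, 5501)
pi = np.exp(-0.5 * ((scores - 550) / 80) ** 2)
pi /= pi.sum()
rho = 1 / (1 + np.exp(-(scores - 600) / 40))

utility = 1.0 * rho - 2.0 * (1 - rho)     # u(x) > 0 once rho(x) > 2/3
delta = 5.0 * rho - 45.0 * (1 - rho)      # Delta(x) > 0 only once rho(x) > 0.9

beta_bank = pi[utility > 0].sum()         # utility-maximizing selection rate
beta_altruistic = pi[delta > 0].sum()     # reputation-maximizing ("altruistic") rate
print(f"utility-maximizing selection rate:    {beta_bank:.3f}")
print(f"reputation-maximizing selection rate: {beta_altruistic:.3f}")
# With the reversed ordering of thresholds, the profit-maximizing bank lends to
# more people than the altruistic policy would; maximizing A's reputation
# change therefore requires reducing the selection rate.
```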

Since the original assumption in the paper is invalid, none of its resulting conclusions apply. The utility-maximizing policy is already the optimal win-win solution.

What is the point of fairness?

Since the utility-maximizing policy is already optimal, why should we still impose fairness? The reason is neither to maximize utility nor to maximize the reputation change; we impose fairness even when we know we are sure to lose money and hurt the reputation of the population in question. We do it because we want social stability and the possibility of change.

  • Social stability: If we leave the disadvantaged population as is, their situation may worsen. They may turn to crime and form mafias, which makes society dangerous.

  • Possibility: Being born into the disadvantaged population does not mean one cannot become a great person. Although the probability is low, it is not zero. We hope for someone like Martin Luther King, who fought for peace and love. We want to give them hope.

Fairness is selflessness, love, mercy, and forgiveness of betrayal. It is a noble act, not some hypocritical charity or cheap compassion. In this sense, the paper totally misses the point.

What will be the future of fair machine learning?

What will the future of fair machine learning be? Coping with the prejudice in the data will, I think, be a promising alternative direction.

Data do not always tell the truth; in fact, they lie more often than not. If a bank manages to filter out the prejudice in the data and learn the true scores, it will gain the upper hand in the industry. While the other banks hesitate to give loans to the population in question, that bank already knows it is actually a low-risk population and starts making profits from it. Besides making money, this bank is also being much fairer than the other banks.

An excellent tool for this direction would be bandit algorithms. This approach, I predict, will be a research interest of academia in the next few years.
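
As an illustration, here is a toy Thompson-sampling sketch of this idea; the group-level arms, the true repayment rates, and the profit numbers are all hypothetical and not tied to any real system.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_REPAY = {"A": 0.85, "B": 0.88}      # hypothetical: A is nearly as reliable as B
PROFIT, LOSS = 1.0, 4.0                  # bank's gain on repayment, loss on default

posterior = {g: [1.0, 1.0] for g in TRUE_REPAY}   # Beta(successes + 1, failures + 1)
total_profit = 0.0

for _ in range(20_000):
    group = rng.choice(["A", "B"])                        # an applicant arrives
    a, b = posterior[group]
    sampled_rate = rng.beta(a, b)                         # Thompson sample of the repay rate
    expected_profit = PROFIT * sampled_rate - LOSS * (1 - sampled_rate)
    if expected_profit > 0:                               # lend only if it looks profitable
        repaid = rng.random() < TRUE_REPAY[group]
        total_profit += PROFIT if repaid else -LOSS
        posterior[group][0 if repaid else 1] += 1         # update the posterior

for g, (a, b) in posterior.items():
    print(f"group {g}: estimated repayment rate {a / (a + b):.3f}")
print(f"total profit: {total_profit:.0f}")
# A bank that explores like this discovers that lending to A is profitable,
# even if prejudiced historical data suggested otherwise.
```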

Written on August 14, 2018