Abstract: Statistical parity metrics have been widely studied and endorsed in the AI community as a means of achieving fairness, but they suffer from at least two weaknesses. They disregard the actual welfare consequences of decisions and may therefore fail to achieve the kind of fairness that is desired for disadvantaged groups. In addition, they are often incompatible with each other, and there is no convincing justification for selecting one rather than another. This paper explores whether a broader conception of social justice, based on optimizing a social welfare function (SWF), can be useful for assessing various definitions of parity. We focus on the well-known alpha fairness SWF, which has been defended by axiomatic and bargaining arguments over a period of 70 years. We analyze the optimal solution and show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity. We also find that predictive rate parity is of limited usefulness. These results suggest that optimization theory can shed light on the intensely discussed question of how to achieve group fairness in AI.
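For reference, the alpha fairness family referred to here has the standard form found in the literature (the abstract itself does not state the formula):

```latex
% Standard alpha-fairness SWF over positive utilities u_1, ..., u_n,
% with inequality-aversion parameter alpha >= 0:
W_\alpha(u) =
\begin{cases}
\dfrac{1}{1-\alpha} \sum_{i=1}^{n} u_i^{\,1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1, \\[1ex]
\sum_{i=1}^{n} \log u_i, & \alpha = 1.
\end{cases}
```

Setting alpha = 0 recovers the utilitarian SWF, alpha = 1 yields the Nash bargaining (proportional fairness) criterion, and alpha approaching infinity approaches the Rawlsian maximin, which is why a single parameter can span the range from pure efficiency to pure equity.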
Abstract: We propose optimization as a general paradigm for formalizing fairness in AI-based decision models. We argue that optimization models allow the formulation of a wide range of fairness criteria as social welfare functions, while enabling AI to take advantage of highly advanced solution technology. We show how optimization models can assist fairness-oriented decision making in the context of neural networks, support vector machines, and rule-based systems by maximizing a social welfare function subject to appropriate constraints. In particular, we state tractable optimization models for a variety of functions that measure fairness or a combination of fairness and efficiency. These include several inequality metrics, Rawlsian criteria, the McLoone and Hoover indices, alpha fairness, the Nash and Kalai-Smorodinsky bargaining solutions, combinations of Rawlsian and utilitarian criteria, and statistical bias measures. All of these models can be efficiently solved by linear programming, mixed integer/linear programming, or (in two cases) specialized convex programming methods.
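As a minimal sketch of this paradigm (our own illustration, not code from the paper), the following Python snippet maximizes one of the simplest SWFs mentioned above, a Rawlsian maximin criterion, by linear programming; the per-unit utilities a and budget B are invented for the example:

```python
# A minimal sketch: maximize a Rawlsian maximin SWF by linear programming.
# Group i receives allocation x_i and derives utility a_i * x_i; the
# coefficients a and budget B are hypothetical, chosen only to illustrate.
import numpy as np
from scipy.optimize import linprog

a = np.array([1.0, 0.6, 0.3])   # hypothetical per-unit utilities for 3 groups
B = 10.0                        # hypothetical total budget
n = len(a)

# Decision vector [x_1, ..., x_n, t]; maximize t  <=>  minimize -t.
c = np.append(np.zeros(n), -1.0)

# t - a_i * x_i <= 0 makes t a lower bound on every group's utility.
A_ub = np.hstack([-np.diag(a), np.ones((n, 1))])
b_ub = np.zeros(n)
# Budget constraint: sum_i x_i <= B (t gets coefficient 0).
A_ub = np.vstack([A_ub, np.append(np.ones(n), 0.0)])
b_ub = np.append(b_ub, B)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
print("allocation:", res.x[:n], "worst-off utility:", res.x[-1])
```

Many of the other criteria listed in the abstract fit the same mold with a different objective and, in some cases, additional integer variables, which is what makes a single optimization framework able to host so many fairness definitions.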
Abstract: Optimization models generally aim for efficiency by maximizing total benefit or minimizing cost. Yet a trade-off between fairness and efficiency is an important element of many practical decisions. We propose a principled and practical method for balancing these two criteria in an optimization model. Following a critical assessment of existing schemes, we define a set of social welfare functions (SWFs) that combine Rawlsian leximax fairness and utilitarianism and overcome some of the weaknesses of previous approaches. In particular, we regulate the equity/efficiency trade-off with a single parameter that has a meaningful interpretation in practical contexts. We formulate the SWFs using mixed integer constraints and sequentially maximize them subject to constraints that define the problem at hand. After providing practical step-by-step instructions for implementation, we demonstrate the method on problems of realistic size involving healthcare resource allocation and disaster preparation. The solution times are modest, ranging from a fraction of a second to 18 seconds for a given value of the trade-off parameter.
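To illustrate how a single parameter can regulate the equity/efficiency trade-off, here is a hedged two-stage sketch (a generic epsilon-constraint scheme of our own devising, not the paper's exact SWF or its sequential leximax procedure); the data a, B, and Delta are invented:

```python
# A hedged sketch of trading off Rawlsian and utilitarian criteria with a
# single parameter Delta: first compute the maximin utility t*, then maximize
# total utility while letting no group fall more than Delta below t*.
import numpy as np
from scipy.optimize import linprog

a = np.array([1.0, 0.6, 0.3])   # hypothetical per-unit utilities
B, Delta = 10.0, 0.5            # hypothetical budget and trade-off parameter
n = len(a)

# Stage 1: Rawlsian maximin over variables [x_1, ..., x_n, t].
c1 = np.append(np.zeros(n), -1.0)
A1 = np.vstack([np.hstack([-np.diag(a), np.ones((n, 1))]),
                np.append(np.ones(n), 0.0)])
b1 = np.append(np.zeros(n), B)
t_star = -linprog(c1, A_ub=A1, b_ub=b1).fun

# Stage 2: utilitarian objective, with a_i * x_i >= t* - Delta for every
# group and the same budget constraint.
c2 = -a                                    # maximize sum_i a_i * x_i
A2 = np.vstack([-np.diag(a), np.ones(n)])
b2 = np.append(np.full(n, Delta - t_star), B)
res = linprog(c2, A_ub=A2, b_ub=b2)
print("allocation:", res.x, "total utility:", -res.fun)
```

In this sketch Delta has a direct interpretation: it is the most utility any group may lose, relative to the maximin optimum, in exchange for greater total utility; Delta = 0 is purely Rawlsian, while a large Delta recovers the utilitarian solution.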
Abstract: Concepts of consistency have long played a key role in constraint programming but have never been developed in integer programming (IP). Consistency nonetheless plays a role in IP as well. For example, cutting planes can reduce backtracking by achieving various forms of consistency as well as by tightening the linear programming (LP) relaxation. We introduce a type of consistency that is particularly suited for 0-1 programming and develop the associated theory. We define a 0-1 constraint set as LP-consistent when any partial assignment that is consistent with its linear programming relaxation is consistent with the original 0-1 constraint set. We prove basic properties of LP-consistency, including its relationship with Chvátal-Gomory cuts and the integer hull. We show that a weak form of LP-consistency can reduce or eliminate backtracking in a way analogous to k-consistency but is easier to achieve. In so doing, we identify a class of valid inequalities that can be more effective than traditional cutting planes at cutting off infeasible 0-1 partial assignments.
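To make the definition concrete, consider a toy example of our own (not taken from the paper): the 0-1 constraint set S consisting of x_1 + 2x_2 >= 1 and -x_1 + 2x_2 <= 1 with x_1, x_2 in {0,1}. The partial assignment x_1 = 0 is consistent with the LP relaxation, because x_2 = 1/2 satisfies both inequalities, yet it extends to no 0-1 solution: x_2 = 0 violates the first inequality and x_2 = 1 the second. S is therefore not LP-consistent. Adding the valid inequality x_1 >= 1, which every 0-1 solution of S satisfies, makes the set LP-consistent and lets the LP relaxation alone cut off the infeasible partial assignment x_1 = 0, illustrating how such inequalities can eliminate backtracking.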