Abstract: Consider a multi-class labelling problem, where the labels can take values in $[k]$, and a predictor predicts a distribution over the labels. In this work, we study the following foundational question: Are there notions of multi-class calibration that give strong guarantees of meaningful predictions and can be achieved in time and sample complexities polynomial in $k$? Prior notions of calibration exhibit a tradeoff between computational efficiency and expressivity: they either require sample complexity exponential in $k$, require solving computationally intractable problems, or give only rather weak guarantees. Our main contribution is a notion of calibration that achieves all these desiderata: we formulate a robust notion of projected smooth calibration for multi-class predictions, and give new recalibration algorithms for efficiently calibrating predictors under this definition with complexity polynomial in $k$. Projected smooth calibration gives strong guarantees for all downstream decision makers who want to use the predictor for binary classification problems of the form "does the label belong to a subset $T \subseteq [k]$?" (e.g., is this an image of an animal?). It ensures that the probabilities obtained by summing the probabilities assigned to labels in $T$ are close to those of some perfectly calibrated binary predictor for that task. We also show that natural strengthenings of our definition are computationally hard to achieve: they run into information-theoretic barriers or computational intractability. Underlying both our upper and lower bounds is a tight connection that we prove between multi-class calibration and the well-studied problem of agnostic learning in the (standard) binary prediction setting.
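As an illustrative rendering (not the paper's verbatim definition): for a subset $T \subseteq [k]$, the predictor induces a binary prediction by summing the relevant coordinates, and the projected guarantee can be read as a smooth calibration bound for that induced predictor over $1$-Lipschitz test functions $w$:
\[ p_T(x) = \sum_{i \in T} p_i(x), \qquad \sup_{w:\, 1\text{-Lipschitz}} \Big| \mathbb{E}\big[ w\big(p_T(x)\big)\,\big(\mathbb{1}[y \in T] - p_T(x)\big) \big] \Big| \le \epsilon. \]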
Abstract: ML-based predictions are used to inform consequential decisions about individuals. How should we use predictions (e.g., risk of heart attack) to inform downstream binary classification decisions (e.g., undergoing a medical procedure)? When the risk estimates are perfectly calibrated, the answer is well understood: a classification problem's cost structure induces an optimal treatment threshold $j^{\star}$. In practice, however, some amount of miscalibration is unavoidable, raising a fundamental question: how should one use potentially miscalibrated predictions to inform binary decisions? We formalize a natural (distribution-free) solution concept: given anticipated miscalibration of $\alpha$, we propose using the threshold $j$ that minimizes the worst-case regret over all $\alpha$-miscalibrated predictors, where the regret is the difference in clinical utility between using the threshold in question and using the optimal threshold in hindsight. We provide closed-form expressions for $j$ when miscalibration is measured using both expected and maximum calibration error, which reveal that it indeed differs from $j^{\star}$ (the optimal threshold under perfect calibration). We validate our theoretical findings on real data, demonstrating that there are natural cases in which making decisions using $j$ improves the clinical utility.
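Schematically (the notation here is ours for illustration), writing $\mathrm{Regret}(t, \tilde{p})$ for the clinical-utility regret of thresholding at $t$ when the deployed predictor is $\tilde{p}$, the proposed threshold is the minimax choice over all predictors with calibration error at most $\alpha$:
\[ j \in \arg\min_{t \in [0,1]} \; \max_{\tilde{p}:\, \mathrm{CE}(\tilde{p}) \le \alpha} \mathrm{Regret}(t, \tilde{p}). \]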
Abstract: In many machine learning settings there is an inherent tension between fairness and accuracy desiderata. How should one proceed in light of such tradeoffs? In this work we introduce and study $\gamma$-disqualification, a new framework for reasoning about fairness-accuracy tradeoffs with respect to a benchmark class $H$ in the context of supervised learning. Our requirement stipulates that a classifier should be disqualified if it is possible to improve its fairness by switching to another classifier from $H$ without paying "too much" in accuracy. The notion of "too much" is quantified via a parameter $\gamma$ that serves as a vehicle for specifying acceptable tradeoffs between accuracy and fairness, in a way that is independent of the specific metrics used to quantify fairness and accuracy in a given task. Towards this objective, we establish principled translations between units of accuracy and units of (un)fairness for different accuracy measures. We show $\gamma$-disqualification can be used to easily compare different learning strategies in terms of how they trade off fairness and accuracy, and we give an efficient reduction from the problem of finding the optimal classifier that satisfies our requirement to the problem of approximating the Pareto frontier of $H$.
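One plausible schematic reading (an assumption for illustration, not the paper's exact definition; $\mathrm{err}$ and $\mathrm{unf}$ denote whichever accuracy and unfairness measures are in use, and the exchange rate between their units is what $\gamma$ encodes):
\[ h \text{ is disqualified} \iff \exists\, h' \in H:\ \mathrm{unf}(h') < \mathrm{unf}(h) \ \text{ and } \ \mathrm{err}(h') - \mathrm{err}(h) \le \gamma \cdot \big(\mathrm{unf}(h) - \mathrm{unf}(h')\big). \]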
Abstract: Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis? -- and which increasingly form the basis for life-altering decisions. Drawing on an understanding of computational indistinguishability developed in complexity theory and cryptography, we introduce Outcome Indistinguishability. Predictors that are Outcome Indistinguishable yield a generative model for outcomes that cannot be efficiently refuted on the basis of the real-life observations produced by Nature. We investigate a hierarchy of Outcome Indistinguishability definitions, whose stringency increases with the degree to which distinguishers may access the predictor in question. Our findings reveal that Outcome Indistinguishability behaves qualitatively differently from previously studied notions of indistinguishability. First, we provide constructions at all levels of the hierarchy. Then, leveraging recently-developed machinery for proving average-case fine-grained hardness, we obtain lower bounds on the complexity of the more stringent forms of Outcome Indistinguishability. This hardness result provides the first scientific grounds for the political argument that, when inspecting algorithmic risk prediction instruments, auditors should be granted oracle access to the algorithm, not simply historical predictions.
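In its weakest (sample-access) form, the requirement can be stated roughly as follows (a sketch, with $p^*$ denoting Nature's true outcome probabilities and $\mathcal{A}$ the class of distinguishers): no distinguisher can tell real outcomes from outcomes sampled according to the predictor $\tilde{p}$,
\[ \Big| \Pr_{x \sim \mathcal{D},\, y^* \sim \mathrm{Ber}(p^*(x))}\big[A(x, y^*) = 1\big] - \Pr_{x \sim \mathcal{D},\, \tilde{y} \sim \mathrm{Ber}(\tilde{p}(x))}\big[A(x, \tilde{y}) = 1\big] \Big| \le \epsilon \quad \text{for all } A \in \mathcal{A}. \]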
Abstract: It is well understood that classification algorithms, for example, for deciding on loan applications, cannot be evaluated for fairness without taking context into account. We examine what can be learned from a fairness oracle equipped with an underlying understanding of ``true'' fairness. The oracle takes as input a (context, classifier) pair satisfying an arbitrary fairness definition, and accepts or rejects the pair according to whether the classifier satisfies the underlying fairness truth. Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle. Since every ``truly fair'' classifier induces a coarse metric, in which those receiving the same decision are at distance zero from one another and those receiving different decisions are at distance one, this extraction process provides the basis for ensuring a rough form of metric fairness, also known as individual fairness. Our principal technical result is a higher-fidelity extractor under a mild technical constraint on the weak oracle's conception of fairness. Our framework permits the scenario in which many classifiers, with differing outcomes, may all be considered fair. Our results have implications for interpretability -- a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be ``unfair'' or illegitimately derived.
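The coarse metric described above, induced by a classifier $c$, is simply
\[ d_c(x, x') = \begin{cases} 0 & \text{if } c(x) = c(x'), \\ 1 & \text{otherwise}, \end{cases} \]
and extracting an (approximately) truly fair classifier is what provides the basis for the rough form of metric fairness mentioned in the abstract.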
Abstract: As algorithms are increasingly used to make important decisions pertaining to individuals, algorithmic discrimination is becoming a prominent concern. The seminal work of Dwork et al. [ITCS 2012] introduced the notion of individual fairness (IF): given a task-specific similarity metric, every pair of similar individuals should receive similar outcomes. In this work, we study fairness when individuals have diverse preferences over the possible outcomes. We show that in such settings, individual fairness can be too restrictive: requiring individual fairness can lead to less-preferred outcomes for the very individuals that IF aims to protect (e.g. a protected minority group). We introduce and study a new notion of preference-informed individual fairness (PIIF), a relaxation of individual fairness that allows for outcomes that deviate from IF, provided the deviations are in line with individuals' preferences. We show that PIIF can allow for solutions that are considerably more beneficial to individuals than the best IF solution. We further show how to efficiently optimize any convex objective over the outcomes subject to PIIF, for a rich class of individual preferences. Motivated by fairness concerns in targeted advertising, we apply this new fairness notion to the multiple-task setting introduced by Dwork and Ilvento [ITCS 2019]. We show that, in this setting too, PIIF can allow for considerably more beneficial solutions, and we extend our efficient optimization algorithm to this setting.
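For reference (the standard Dwork et al. Lipschitz formulation, restated schematically), IF requires of a randomized classifier $M$, a divergence $D$ on outcome distributions, and a similarity metric $d$ that
\[ D\big(M(x), M(x')\big) \le d(x, x') \quad \text{for every pair of individuals } x, x'; \]
PIIF, roughly, permits a pair to violate this constraint when the deviation moves the affected individuals toward outcome distributions they prefer.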
Abstract: The seminal work of Dwork {\em et al.} [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of {\em approximate metric-fairness}: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metric-fairness {\em does} generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.
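Schematically (the slack parameters $\gamma, \alpha$ here are illustrative, not the paper's exact parameterization), approximate metric-fairness for a predictor $h$ with respect to a metric $d$ asks that
\[ \Pr_{x, x' \sim \mathcal{D}}\Big[ \big| h(x) - h(x') \big| > d(x, x') + \gamma \Big] \le \alpha, \]
i.e., all but a small fraction of random pairs are treated about as similarly as the metric demands.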
Abstract: As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. The output of an algorithm can be discriminatory for many reasons, most notably: (1) the data used to train the algorithm might be biased (in various ways) to favor certain populations over others; (2) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. This work focuses on the latter concern. We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a specified class of computations. We think of the class as being quite rich; in particular, it can contain many overlapping subgroups of a protected group. We show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. Along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw new connections to computational learning models such as agnostic learning.
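Concretely (up to the exact parameterization used in the paper), a predictor $p$ is $\alpha$-multicalibrated with respect to a collection $\mathcal{C}$ of subpopulations if, for every $S \in \mathcal{C}$ and every value $v$ taken by the predictor,
\[ \Big| \mathbb{E}\big[\, y - p(x) \;\big|\; x \in S,\ p(x) = v \,\big] \Big| \le \alpha. \]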
Abstract: We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS 2012], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike previous works on metric-based fairness, we do not assume that the entire metric is known to the learning algorithm. Instead, we study the setting where a learning algorithm can query this metric a bounded number of times to ascertain similarities between particular pairs of individuals. For example, the queries might be answered by a panel of specialists spanning social scientists, statisticians, demographers, and ethicists. We propose "metric multifairness," a new definition of fairness that is parameterized by a similarity metric $\delta$ on pairs of individuals and a collection ${\cal C}$ of "comparison sets" over pairs of individuals. One way to view this collection is as the family of comparisons that can be expressed within some computational bound. With this interpretation, metric multifairness loosely guarantees that similar subpopulations are treated similarly, as long as these subpopulations can be identified within this bound. In particular, metric multifairness implies that a rich class of subpopulations is protected from a multitude of discriminatory behaviors. We provide a general-purpose framework for learning a metric multifair hypothesis that achieves near-optimal loss from a small number of random samples from the metric $\delta$. We study the sample complexity and time complexity of learning a metric multifair hypothesis (providing rather tight upper and lower bounds) by connecting it to the task of learning the class ${\cal C}$. In particular, if the class ${\cal C}$ admits an efficient agnostic learner, then we can learn such a metric multifair hypothesis efficiently.
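Loosely (an illustrative rendering, not the paper's exact statement), metric multifairness asks that for every comparison set $S \in {\cal C}$ and a small slack $\gamma$,
\[ \mathbb{E}_{(x, x') \sim S}\Big[ \big| h(x) - h(x') \big| \Big] \le \mathbb{E}_{(x, x') \sim S}\big[ \delta(x, x') \big] + \gamma, \]
i.e., on average over each identifiable comparison set, pairs are treated about as similarly as the metric $\delta$ requires.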
Abstract: This work considers computationally efficient privacy-preserving data release. We study the task of analyzing a database containing sensitive information about individual participants. Given a set of statistical queries on the data, we want to release approximate answers to the queries while also guaranteeing differential privacy---protecting each participant's sensitive data. Our focus is on computationally efficient data release algorithms; we seek algorithms whose running time is polynomial, or at least sub-exponential, in the data dimensionality. Our primary contribution is a computationally efficient reduction from differentially private data release for a class of counting queries, to learning thresholded sums of predicates from a related class. We instantiate this general reduction with a variety of algorithms for learning thresholds. These instantiations yield several new results for differentially private data release. As two examples, taking $\{0,1\}^d$ to be the data domain (of dimension $d$), we obtain differentially private algorithms for: (*) Releasing all $k$-way conjunctions. For any given $k$, the resulting data release algorithm has bounded error as long as the database is of size at least $d^{O(\sqrt{k\log(k\log d)})}$. The running time is polynomial in the database size. (*) Releasing a $(1-\gamma)$-fraction of all parity queries. For any $\gamma \geq \mathrm{poly}(1/d)$, the algorithm has bounded error as long as the database is of size at least $\mathrm{poly}(d)$. The running time is polynomial in the database size. Several other instantiations yield further results for privacy-preserving data release. Of the two results highlighted above, the first learning algorithm uses techniques for representing thresholded sums of predicates as low-degree polynomial threshold functions. The second learning algorithm is based on Jackson's Harmonic Sieve algorithm [Jackson 1997].
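To fix ideas (the notation here is ours): a counting query is specified by a predicate $\phi$ and asks what fraction of the database satisfies it, and the hypotheses targeted by the reduction are thresholded sums of predicates:
\[ q_{\phi}(D) = \frac{1}{|D|} \sum_{x \in D} \phi(x), \qquad h(x) = \mathbb{1}\Big[\, \textstyle\sum_{i} c_i\, \phi_i(x) \ge t \,\Big]. \]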