Abstract: Ensembles of Deep Neural Networks, known as Deep Ensembles, are widely used as a simple way to boost predictive performance. However, their impact on algorithmic fairness is not yet well understood. Algorithmic fairness investigates how a model's performance varies across different groups, typically defined by protected attributes such as age, gender, or race. In this work, we investigate the interplay between the performance gains from Deep Ensembles and fairness. Our analysis reveals that these gains unevenly favor different groups, in what we refer to as a disparate benefits effect. We empirically investigate this effect with Deep Ensembles applied to popular facial analysis and medical imaging datasets, where protected group attributes are given, and find that it occurs for multiple established group fairness metrics, including statistical parity and equal opportunity. Furthermore, we identify the per-group difference in the predictive diversity of ensemble members as a potential cause of the disparate benefits effect. Finally, we evaluate different approaches to reduce the unfairness caused by the disparate benefits effect. Our findings show that post-processing is an effective method to mitigate this unfairness while preserving the improved performance of Deep Ensembles.
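A minimal sketch of the two group fairness metrics named in this abstract, statistical parity and equal opportunity, evaluated for a single model versus an averaged ensemble. This is illustrative rather than the paper's code: the data are synthetic, and the per-group noise levels merely mimic the kind of per-group difference in predictive diversity that the abstract hypothesizes as a cause.

```python
# Illustrative sketch (not the authors' code): how ensembling can shift
# group fairness metrics. Assumes binary labels, binary predictions, and a
# binary protected attribute; all data here are synthetic.
import numpy as np

def statistical_parity_gap(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """|TPR_g0 - TPR_g1|: true-positive-rate difference across groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
n, n_members = 5000, 10
y_true = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)

# Simulate ensemble members that are noisier on group 1, i.e., a per-group
# difference in predictive diversity (the hypothesized mechanism).
noise = np.where(group == 0, 0.15, 0.30)
member_probs = np.clip(
    y_true[None, :] + rng.normal(0, noise, (n_members, n)), 0, 1)

single_pred = (member_probs[0] > 0.5).astype(int)          # one member
ensemble_pred = (member_probs.mean(0) > 0.5).astype(int)   # averaged ensemble

for name, pred in [("single", single_pred), ("ensemble", ensemble_pred)]:
    print(name,
          "SP gap:", round(statistical_parity_gap(pred, group), 3),
          "EOpp gap:", round(equal_opportunity_gap(y_true, pred, group), 3))
```

Comparing the two printed rows shows how averaging members can change the fairness gaps, not just overall accuracy, which is the effect the abstract studies.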
Abstract: Link recommendation algorithms contribute to shaping the relations of billions of social network users worldwide. To maximize relevance, they typically propose connecting users who are similar to each other. This has been found to create information silos, exacerbating the isolation suffered by vulnerable groups and perpetuating societal stereotypes. To mitigate these limitations, a significant body of work has been devoted to the implementation of fair link recommendation methods. However, most approaches do not question the ultimate goal of link recommendation algorithms, namely the monetization of users' engagement in intricate business models of data trade. This paper advocates for a diversification of the players and purposes of social network platforms, aligned with the pursuit of social justice. To illustrate this conceptual goal, we present ERA-Link, a novel link recommendation algorithm based on spectral graph theory that counteracts the systemic societal discrimination suffered by vulnerable groups by explicitly implementing affirmative action. We propose four principled evaluation measures, derived from effective resistance, to quantitatively analyze the behavior of the proposed method and compare it to three alternative approaches. Experiments with synthetic and real-world networks illustrate how ERA-Link generates better outcomes according to all evaluation measures, not only for the vulnerable group but for the whole network. In other words, ERA-Link recommends connections that mitigate the structural discrimination of a vulnerable group, improve social cohesion, and increase the social capital of all network users. Furthermore, by promoting access to a diversity of users, ERA-Link facilitates innovation opportunities.
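ERA-Link itself is not specified in the abstract, but its evaluation measures are derived from effective resistance, which is straightforward to compute from the graph Laplacian: R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), with L^+ the Moore-Penrose pseudo-inverse of L. The sketch below uses a toy two-community graph and a hypothetical resistance-based scoring rule of my own, not the paper's algorithm, to show how effective resistance can rank candidate links.

```python
# Hedged sketch (not ERA-Link itself): computing pairwise effective
# resistances and using them to score candidate links on a toy graph.
import numpy as np

def effective_resistance_matrix(adj):
    """Pairwise effective resistances R(u, v) from an adjacency matrix."""
    lap = np.diag(adj.sum(1)) - adj          # graph Laplacian L
    lap_pinv = np.linalg.pinv(lap)           # Moore-Penrose pseudo-inverse L^+
    d = np.diag(lap_pinv)
    # R(u, v) = L+_uu + L+_vv - 2 L+_uv, vectorized over all pairs
    return d[:, None] + d[None, :] - 2 * lap_pinv

# Toy network: two complete 4-node communities joined by a single bridge.
adj = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1
adj[3, 4] = adj[4, 3] = 1  # the bridge edge

R = effective_resistance_matrix(adj)
# A hypothetical resistance-based recommender: propose the non-edge with the
# highest effective resistance, i.e., the structurally most distant pair.
non_edges = [(i, j) for i in range(8) for j in range(i + 1, 8) if adj[i, j] == 0]
best = max(non_edges, key=lambda e: R[e])
print("recommended link:", best, "R =", round(float(R[best]), 3))
```

On this toy graph the highest-resistance non-edge connects the two communities far from the bridge, matching the intuition of recommending links that counteract structural isolation.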
Abstract: In this paper, we propose FairShap, a novel and interpretable pre-processing (re-weighting) method for fair algorithmic decision-making based on data valuation. FairShap builds on the Shapley value, a well-known solution concept from cooperative game theory for the fair allocation of resources. Our approach is easily interpretable, as it measures the contribution of each training data point to a predefined fairness metric. We empirically validate FairShap on several state-of-the-art datasets of diverse natures, with different training scenarios and models. The proposed approach outperforms other methods, yielding significantly fairer models with similar levels of accuracy. In addition, we illustrate FairShap's interpretability by means of histograms and latent space visualizations. We believe this work represents a promising direction in interpretable, model-agnostic approaches to algorithmic fairness.
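The abstract does not give FairShap's exact estimator, so the following is only a hedged sketch of the underlying idea: permutation-based Monte Carlo Shapley values of training points, where a coalition's payoff is a fairness score (here, one minus the equal-opportunity gap) of a model trained on that coalition, and the resulting per-point values could serve as re-weighting scores in a pre-processing step. All data, the logistic-regression model, and the helper names are hypothetical.

```python
# Hedged sketch (not FairShap's actual estimator): Monte Carlo Shapley
# values of training points with a fairness payoff function.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fairness_value(idx, X, y, Xv, yv, gv):
    """Payoff of coalition idx: 1 - equal-opportunity gap on validation data."""
    if len(idx) < 5 or len(np.unique(y[idx])) < 2:
        return 0.0  # too few points to train a meaningful model
    pred = LogisticRegression(max_iter=200).fit(X[idx], y[idx]).predict(Xv)
    tpr = lambda grp: pred[(gv == grp) & (yv == 1)].mean()
    return 1.0 - abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
n, d = 40, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, n)  # protected attribute (synthetic)
y = (X[:, 0] + 0.5 * g + rng.normal(0, 0.5, n) > 0).astype(int)
Xv = rng.normal(size=(200, d))
gv = rng.integers(0, 2, 200)
yv = (Xv[:, 0] + 0.5 * gv + rng.normal(0, 0.5, 200) > 0).astype(int)

# Shapley value of point i = average marginal payoff gain when i joins the
# coalition of points preceding it in a random permutation.
shap = np.zeros(n)
n_perms = 20
for _ in range(n_perms):
    perm = rng.permutation(n)
    prev = 0.0
    for k, i in enumerate(perm):
        cur = fairness_value(perm[: k + 1], X, y, Xv, yv, gv)
        shap[i] += cur - prev   # marginal fairness contribution of point i
        prev = cur
shap /= n_perms
print("points most harmful to fairness:", np.argsort(shap)[:5])
```

Points with the most negative values degrade the fairness metric when added to training, which is the kind of per-point contribution the abstract describes as the basis for interpretable re-weighting.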