Abstract: User-side group fairness is crucial for modern recommender systems, as it aims to alleviate the performance disparity between groups of users defined by sensitive attributes such as gender, race, or age. We find that this disparity tends to persist or even increase over time, which calls for effective ways to address user-side fairness in a dynamic environment, a setting that has rarely been explored in the literature. However, fairness-constrained re-ranking, a typical method for ensuring user-side fairness (i.e., reducing performance disparity), faces two fundamental challenges in the dynamic setting: (1) the non-differentiability of the ranking-based fairness constraint, which hinders end-to-end training, and (2) time inefficiency, which impedes quick adaptation to changes in user preferences. In this paper, we propose FAir Dynamic rEcommender (FADE), an end-to-end framework with a fine-tuning strategy that dynamically alleviates performance disparity. To tackle the above challenges, FADE uses a novel fairness loss, designed to be differentiable and lightweight, to fine-tune model parameters and ensure both user-side fairness and high-quality recommendations. Through extensive experiments on a real-world dataset, we empirically demonstrate that FADE reduces performance disparity both effectively and efficiently, and furthermore improves overall recommendation quality over time compared to not using any new data.
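To make the idea of a differentiable, lightweight fairness loss concrete, the sketch below shows one way such an objective could be assembled: a smooth (sigmoid-based) surrogate for a ranking hit is averaged per user group, and the squared gap between group utilities is added to a BPR-style recommendation loss. This is a minimal illustration under assumed design choices; the function names (`smooth_hit_utility`, `fairness_penalty`, `fade_style_loss`) and the specific surrogate are hypothetical and not FADE's actual implementation.

```python
# Minimal sketch (not FADE itself): a differentiable user-side fairness penalty
# combined with a recommendation loss, so both can be fine-tuned end to end.
import torch

def smooth_hit_utility(pos_scores, neg_scores, temperature=1.0):
    """Differentiable surrogate for a ranking hit: P(positive item outranks sampled negatives)."""
    # pos_scores: (B,), neg_scores: (B, K) sampled negatives per interaction
    return torch.sigmoid((pos_scores.unsqueeze(1) - neg_scores) / temperature).mean(dim=1)

def fairness_penalty(utility, group_ids):
    """Squared gap between the mean utilities of two user groups (e.g., split by a sensitive attribute)."""
    u0 = utility[group_ids == 0].mean()
    u1 = utility[group_ids == 1].mean()
    return (u0 - u1) ** 2

def fade_style_loss(pos_scores, neg_scores, group_ids, lambda_fair=0.5):
    """BPR-style recommendation loss plus the differentiable fairness penalty."""
    util = smooth_hit_utility(pos_scores, neg_scores)
    rec_loss = -torch.log(util + 1e-8).mean()
    return rec_loss + lambda_fair * fairness_penalty(util, group_ids)

# Toy usage: a mini-batch of 8 interactions with 4 sampled negatives each.
pos = torch.randn(8, requires_grad=True)
neg = torch.randn(8, 4)
groups = torch.tensor([0, 0, 0, 1, 1, 1, 0, 1])
loss = fade_style_loss(pos, neg, groups)
loss.backward()  # gradients flow through both the ranking surrogate and the fairness term
```

Because every term is smooth, the fairness constraint no longer requires a separate re-ranking stage and can be applied during periodic fine-tuning as new interaction data arrives.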
Abstract: Class imbalance is prevalent in real-world node classification tasks and often biases graph learning models toward the majority classes. Most existing studies take a node-centric perspective and aim to address the class imbalance in the training data via node- or class-wise reweighting or resampling. In this paper, we approach the source of the class-imbalance bias from an under-explored topology-centric perspective. Our investigation reveals that, beyond the inherently skewed training class distribution, the graph topology also plays an important role in the formation of predictive bias: we identify two fundamental challenges, namely ambivalent and distant message-passing, that can exacerbate the bias by aggravating majority-class over-generalization and minority-class misclassification. In light of these findings, we devise a lightweight topological augmentation method, ToBA, which dynamically rectifies the nodes influenced by ambivalent/distant message-passing during graph learning, so as to mitigate the class-imbalance bias. We highlight that ToBA is a model-agnostic, efficient, and versatile solution that can be seamlessly combined with, and further boost, other imbalance-handling techniques. Systematic experiments validate the superior performance of ToBA in both promoting imbalanced node classification and mitigating the prediction bias between different classes.
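The sketch below illustrates the general flavor of topology-centric augmentation described above: nodes whose neighborhoods give weak support to their own predicted class (a simple proxy for ambivalent/distant message-passing) are connected to a virtual prototype node of that class. This is a hypothetical, simplified heuristic for illustration only; the helper names (`neighbor_label_distribution`, `augment_edges`) and the risk score are assumptions, not ToBA's actual algorithm or API.

```python
# Minimal sketch (not ToBA itself): topology-aware edge augmentation guided by
# current model predictions, using a PyG-style edge_index tensor.
import torch

def neighbor_label_distribution(edge_index, pred_probs, num_nodes):
    """Average predicted class distribution over each node's neighbors."""
    src, dst = edge_index
    agg = torch.zeros(num_nodes, pred_probs.size(1))
    agg.index_add_(0, dst, pred_probs[src])
    deg = torch.zeros(num_nodes).index_add_(0, dst, torch.ones_like(dst, dtype=torch.float))
    return agg / deg.clamp(min=1).unsqueeze(1)

def augment_edges(edge_index, pred_probs, threshold=0.5):
    """Connect 'risky' nodes to a virtual prototype node of their predicted class."""
    num_nodes, num_classes = pred_probs.shape
    neigh = neighbor_label_distribution(edge_index, pred_probs, num_nodes)
    own = pred_probs.argmax(dim=1)
    # Risk: how little the neighborhood supports the node's own predicted class.
    risk = 1.0 - neigh[torch.arange(num_nodes), own]
    risky = (risk > threshold).nonzero(as_tuple=True)[0]
    # One virtual prototype node per class, appended after the real nodes.
    proto = num_nodes + own[risky]
    new_edges = torch.stack([risky, proto])
    return torch.cat([edge_index, new_edges], dim=1), num_nodes + num_classes

# Toy usage: 5 nodes, 2 classes, a small undirected graph stored as directed pairs.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                           [1, 0, 2, 1, 3, 2, 4, 3]])
pred_probs = torch.softmax(torch.randn(5, 2), dim=1)
aug_edge_index, total_nodes = augment_edges(edge_index, pred_probs)
```

Because the augmentation operates purely on the graph structure and model predictions, it is independent of the backbone GNN and can be layered on top of reweighting or resampling strategies.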