Abstract: Existing multi-label frameworks exploit only the information deduced from the bipartition of the labels into a positive and a negative set; they therefore do not benefit from the ranking order among the positive labels, which is the concept we introduce in this paper. We propose a novel multi-label ranking method, GaussianMLR, which learns implicit class significance values that determine the ranks of the positive labels, instead of treating them as equally important, by following an approach that unifies the ranking and classification tasks associated with multi-label ranking. Owing to the scarcity of suitable public datasets, we introduce eight synthetic datasets generated under varying importance factors, providing an enriched and controllable experimental environment for this study. On both real-world and synthetic datasets, we carry out extensive comparisons with relevant baselines and evaluate performance on both sub-tasks. We show that our method accurately learns a representation of the incorporated positive rank order that is not only consistent with the ground truth but also proportional to the underlying information. We strengthen our claims empirically through comprehensive experimental studies. Code is available at https://github.com/MrGranddy/GaussianMLR.
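To make the described setting concrete, the following is a minimal illustrative sketch, assuming a hypothetical model that outputs one real-valued class significance score per label; it only shows how a single score vector could yield both the bipartition (classification sub-task) and the positive-label ranking (ranking sub-task), and is not the GaussianMLR formulation itself.

```python
import torch

def predict_ranked_labels(scores: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    """Derive a bipartition and a positive-label ranking from one score vector.

    scores: (num_labels,) real-valued class significance estimates (hypothetical model output).
    Returns indices of predicted-positive labels, ordered from most to least significant.
    """
    positive = (scores > threshold).nonzero(as_tuple=True)[0]  # classification sub-task
    order = torch.argsort(scores[positive], descending=True)   # ranking sub-task
    return positive[order]

# Toy example: labels 2 and 0 are predicted positive, with label 2 more significant.
scores = torch.tensor([0.7, -0.3, 1.9, -1.2])
print(predict_ranked_labels(scores))  # tensor([2, 0])
```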
Abstract: Multi-label ranking maps instances to a ranked set of predicted labels drawn from multiple possible classes. The ranking approach to multi-label learning has received attention for its success in multi-label classification, with pairwise label ranking being one of the well-known approaches. However, most existing methods assume that only partial information about the preference relation is known, inferred from the partition of labels into a positive and a negative set, and then treat all labels as equally important. In this paper, we focus on the unique challenge of ranking when the order of the true label set is provided. We propose a novel dedicated loss function that optimizes models by incorporating penalties for incorrectly ranked pairs and makes use of the ranking information present in the input. Our method achieves the best reported performance measures on both synthetic and real-world ranked datasets and shows improvements in the overall ranking of labels. Our experimental results demonstrate that our approach generalizes to a variety of multi-label classification and ranking tasks, while revealing calibration towards a certain ranking ordering.
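As a rough illustration of the kind of pairwise penalty mentioned above, the sketch below assumes ground-truth ranks are given for the positive labels and applies a generic margin-based hinge over incorrectly ordered pairs; the function name, the rank encoding, and the margin are hypothetical choices for illustration, not the exact loss proposed here.

```python
import torch

def pairwise_rank_loss(scores: torch.Tensor, ranks: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Penalize label pairs whose predicted scores disagree with the given rank order.

    scores: (num_labels,) predicted per-label scores.
    ranks:  (num_labels,) ground-truth ranks; 1 = most important, 0 marks a negative label.
    """
    loss = scores.new_zeros(())
    pos = (ranks > 0).nonzero(as_tuple=True)[0]
    for i in pos:
        for j in pos:
            if ranks[i] < ranks[j]:  # label i should be ranked above label j
                # hinge penalty if the score gap falls short of the margin
                loss = loss + torch.clamp(margin - (scores[i] - scores[j]), min=0.0)
    return loss

# Toy example: label 0 should outrank label 2, label 1 is negative.
loss = pairwise_rank_loss(torch.tensor([0.2, -1.0, 0.5]), torch.tensor([1, 0, 2]))
print(loss)  # tensor(1.3000) -- margin violated by 1.0 - (0.2 - 0.5)
```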