Abstract: The way data are organized is widely recognized as having a substantial influence on the performance of machine learning algorithms, particularly in binary classification. We present a theoretical framework showing that the best achievable performance of a binary classifier on a given dataset is constrained primarily by intrinsic properties of the data. Using standard objective functions, evaluation metrics, and binary classifiers, we combine theoretical analysis with empirical study and reach two main conclusions. First, the theoretical upper bound on binary classification performance is attainable on real datasets; this bound can be computed as a trade-off between the learning loss and the evaluation metric. Second, we derive exact upper bounds for three commonly used evaluation metrics, and these bounds consistently support our central thesis: the upper bound depends only on the characteristics of the dataset, independent of the classifier in use. Further analysis reveals a close relationship between the performance upper bound and the degree of class overlap in the data, which can be used to identify the most informative feature subsets during feature engineering.
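The abstract's central claim, a classifier-independent performance ceiling determined by class overlap, can be illustrated with a small synthetic example. The sketch below is ours, not the paper's construction: it assumes two equal-prior one-dimensional Gaussian classes, computes the Bayes-optimal accuracy from the known densities as the data-determined ceiling, and checks that a standard classifier approaches but does not exceed it.

```python
# Illustrative sketch (not the paper's derivation): for two overlapping 1-D
# Gaussian classes, the Bayes-optimal accuracy -- a classifier-independent
# ceiling set only by the data -- follows from the class densities and can be
# compared with what a trained classifier actually reaches.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
mu0, mu1, sigma = 0.0, 1.5, 1.0          # hypothetical class parameters

X0 = rng.normal(mu0, sigma, n)
X1 = rng.normal(mu1, sigma, n)
X = np.concatenate([X0, X1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# Bayes error for equal priors: the overlap of the two class densities.
# The optimal decision threshold sits at the midpoint (mu0 + mu1) / 2.
threshold = (mu0 + mu1) / 2.0
bayes_error = 0.5 * (1 - norm.cdf(threshold, mu0, sigma)) \
            + 0.5 * norm.cdf(threshold, mu1, sigma)
upper_bound_acc = 1.0 - bayes_error

clf = LogisticRegression().fit(X, y)
empirical_acc = clf.score(X, y)

print(f"data-determined accuracy ceiling: {upper_bound_acc:.4f}")
print(f"logistic regression accuracy:     {empirical_acc:.4f}")
```

The more the class means are pulled together (larger overlap), the lower the ceiling becomes, regardless of which classifier is trained.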
Abstract: Link and sign prediction in complex networks support decision-making and recommender systems, for example by predicting potential relationships or relative status levels. Many previous studies have focused on designing specialized algorithms for either link prediction or sign prediction. In this work, we propose an effective model-integration algorithm consisting of network embedding, network feature engineering, and an integrated classifier, which performs both link and sign prediction within a single framework. The network embedding accurately represents the topological structure and, combined with powerful network feature engineering and the integrated classifier, yields better predictions. Experiments on several datasets show that the proposed model achieves state-of-the-art or competitive performance on both link and sign prediction despite its generality. Interestingly, we find that a very low embedding dimension is sufficient for high prediction performance, which significantly reduces the computational overhead of training and prediction. This study offers a powerful methodology for multi-task prediction in complex networks.
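As a concrete, though simplified, illustration of the three-stage framework, the sketch below uses stand-in components rather than the paper's exact ones: a spectral embedding in place of the network embedding, Hadamard-product edge features as the feature-engineering step, and a random forest in place of the integrated classifier, applied to link prediction on a toy graph. Sign prediction would reuse the same pipeline with edge signs as labels. For simplicity the embedding is computed on the full graph; a rigorous evaluation would hide test edges before embedding.

```python
# Illustrative pipeline sketch (stand-in components, not the paper's exact model):
# low-dimensional node embedding -> edge feature engineering -> classifier,
# applied to link prediction on a toy graph.
import numpy as np
import networkx as nx
from sklearn.manifold import SpectralEmbedding
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(300, 3, seed=0)        # toy network
A = nx.to_numpy_array(G)

# Low-dimensional node embedding (spectral, used here as a stand-in; the
# abstract notes that a very small dimension already performs well).
emb = SpectralEmbedding(n_components=4, affinity="precomputed").fit_transform(A)

# Edge feature engineering: Hadamard product of endpoint embeddings for
# existing edges (positives) and sampled non-edges (negatives).
pos = list(G.edges())
nodes = list(G.nodes())
neg = []
while len(neg) < len(pos):
    u, v = rng.choice(nodes, 2, replace=False)
    if not G.has_edge(u, v):
        neg.append((u, v))

X = np.array([emb[u] * emb[v] for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("link prediction AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

Because the embedding, feature construction, and classifier are separate stages, each can be swapped independently, which is what lets a single pipeline of this shape serve both link and sign prediction.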