Abstract: Graph convolutional networks (GCNs) have emerged as powerful models for graph learning tasks, exhibiting promising performance across various domains. While their empirical success is evident, there is a growing need to understand their capabilities from a theoretical perspective. Existing theoretical research has primarily focused on single-layer GCNs, while a comprehensive theoretical exploration of the stability and generalization of deep GCNs remains limited. In this paper, we bridge this gap by delving into the stability and generalization properties of deep GCNs, providing valuable insights by rigorously characterizing the associated upper bounds. Our theoretical results reveal that the stability and generalization of deep GCNs are governed by key factors, such as the maximum absolute eigenvalue of the graph filter operators and the depth of the network. Our theoretical studies contribute to a deeper understanding of the stability and generalization properties of deep GCNs, potentially paving the way for developing more reliable and well-performing models.
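As a concrete illustration of one of the key factors named above, the following is a minimal sketch of computing the maximum absolute eigenvalue of a graph filter operator. The specific filter (the symmetrically normalized adjacency with self-loops used in vanilla GCNs) and the example graph are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def normalized_filter(A: np.ndarray) -> np.ndarray:
    """Symmetrically normalized adjacency with self-loops,
    A_hat = D^{-1/2} (A + I) D^{-1/2} (an assumed filter choice)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def max_abs_eigenvalue(G: np.ndarray) -> float:
    """Maximum absolute eigenvalue (spectral radius) of a symmetric filter."""
    return float(np.max(np.abs(np.linalg.eigvalsh(G))))

# Example: a 5-node ring graph.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
print(max_abs_eigenvalue(normalized_filter(A)))  # 1.0 for this filter
```

For this particular normalization the spectral radius is exactly 1, which is one reason it is a common default; other filter choices can have a larger maximum absolute eigenvalue and, per the stated bounds, affect stability differently.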
Abstract: In this paper, we explore the approximation theory of functions defined on graphs. Our study builds upon approximation results derived from the $K$-functional. We establish a theoretical framework for assessing lower bounds on the approximation of target functions by Graph Convolutional Networks (GCNs) and examine the over-smoothing phenomenon commonly observed in these networks. First, we introduce the concept of a $K$-functional on graphs and establish its equivalence to the modulus of smoothness. We then analyze a typical type of GCN to demonstrate how the high-frequency energy of the output decays, an indicator of over-smoothing; this analysis provides theoretical insight into the nature of over-smoothing within GCNs. Furthermore, we establish a lower bound for the approximation of target functions by GCNs that is governed by the modulus of smoothness of these functions, offering a new perspective on the approximation capabilities of GCNs. In our numerical experiments, we analyze several widely used GCNs and observe the energy-decay phenomenon, corroborating our theoretical results on the exponential order of decay.
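The energy-decay observation can be reproduced in miniature: propagate a signal through repeated graph-convolution steps and track the fraction of energy carried by the high-frequency Laplacian eigenmodes. The filter (normalized adjacency with self-loops), the random graph, and the median frequency cutoff below are illustrative assumptions rather than the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # random undirected graph

d = A.sum(axis=1) + 1.0                    # degrees including self-loops
A_hat = np.diag(d ** -0.5) @ (A + np.eye(n)) @ np.diag(d ** -0.5)

L = np.eye(n) - A_hat                      # normalized graph Laplacian
lam, U = np.linalg.eigh(L)                 # graph Fourier basis
high = lam > np.median(lam)                # "high-frequency" modes (assumed cutoff)

x = rng.standard_normal(n)
for layer in range(6):
    x_hat = U.T @ x                        # graph Fourier transform
    ratio = np.sum(x_hat[high] ** 2) / np.sum(x_hat ** 2)
    print(f"layer {layer}: high-frequency energy ratio = {ratio:.4f}")
    x = A_hat @ x                          # one graph-convolution step
```

Since every eigenvalue of this filter other than the constant mode has magnitude strictly below 1, the printed ratio shrinks with depth, mirroring the over-smoothing behavior the abstract describes.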
Abstract: Recommendation algorithms play an important role in recommender systems (RS), predicting users' interests in and preferences for given items based on their known information. Recently, a recommendation algorithm based on graph Laplacian regularization was proposed, which treats the prediction problem of the recommender system as the reconstruction of a graph signal from a small number of samples on the same graph model. Such a technique takes into account information from both labeled and unlabeled samples, thereby achieving good prediction accuracy. However, when the data size is large, solving the reconstruction model is computationally expensive even with an approximate strategy. In this paper, we propose an equivalent reconstruction model that can be solved exactly at extremely low computational cost, and we derive from it a final prediction algorithm. Our experiments show that the proposed method significantly reduces the computational cost while maintaining good prediction accuracy.
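For orientation, here is a minimal sketch of the baseline graph-Laplacian-regularized reconstruction the abstract refers to (not the paper's accelerated equivalent model): given ratings $y$ observed on a vertex subset, recover the full graph signal $f$ by minimizing $\|Mf - y\|^2 + \mu\, f^\top L f$, where $M$ samples the observed entries and $L$ is the graph Laplacian. The graph, sample set, and value of $\mu$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.random((n, n))
W = np.triu(W, 1)
W = W + W.T                                          # weighted similarity graph
L = np.diag(W.sum(axis=1)) - W                       # combinatorial Laplacian

obs = np.array([0, 2, 3, 6])                         # vertices with known ratings
y = rng.random(obs.size) * 5.0                       # observed ratings
M = np.zeros((obs.size, n))
M[np.arange(obs.size), obs] = 1.0                    # sampling operator

mu = 0.5                                             # regularization weight
# Closed-form solution of min_f ||M f - y||^2 + mu * f^T L f:
f = np.linalg.solve(M.T @ M + mu * L, M.T @ y)
print(f)                                             # predicted ratings on all vertices
```

Solving this dense linear system costs $O(n^3)$ in general, which illustrates why the direct model becomes expensive at scale and motivates the cheaper equivalent formulation the paper proposes.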