Abstract: Link prediction for directed graphs is a crucial task with diverse real-world applications. Recent advances in embedding methods and Graph Neural Networks (GNNs) have shown promising improvements. However, these methods often lack a thorough analysis of embedding expressiveness and suffer from ineffective benchmarks that preclude fair evaluation. In this paper, we propose a unified framework to assess the expressiveness of existing methods, highlighting the impact of dual embeddings and decoder design on performance. To address the limitations of current experimental setups, we introduce DirLinkBench, a robust new benchmark with comprehensive coverage and standardized evaluation. The results show that current methods struggle to achieve strong performance on the new benchmark, while DiGAE performs best overall. We further revisit DiGAE theoretically, showing that its graph convolution aligns with that of a GCN on an undirected bipartite graph. Inspired by these insights, we propose SDGAE, a novel spectral directed graph auto-encoder that achieves state-of-the-art (SOTA) results on DirLinkBench. Finally, we analyze key factors influencing directed link prediction and highlight open challenges.
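To make the dual-embedding idea concrete, here is a minimal NumPy sketch of a DiGAE-style layer and an asymmetric decoder. The function names, single-layer form, and omission of degree normalization are illustrative assumptions, not the authors' implementation. Each node keeps a source embedding S and a target embedding T, propagated with A and A^T; stacking the two updates corresponds to one GCN step on the undirected bipartite graph with adjacency [[0, A], [A^T, 0]], which is the equivalence revisited above.

```python
import numpy as np

def digae_style_layer(A, S, T, W_s, W_t):
    """One dual-embedding propagation step (illustrative, unnormalized).

    A    : (n, n) directed adjacency matrix
    S, T : (n, d) source- and target-role embeddings of the same nodes
    """
    S_new = np.tanh(A @ T @ W_t)    # source role aggregates out-neighbors' target embeddings
    T_new = np.tanh(A.T @ S @ W_s)  # target role aggregates in-neighbors' source embeddings
    return S_new, T_new

def directed_link_score(S, T, u, v):
    """Asymmetric inner-product decoder: score of the directed edge u -> v."""
    return 1.0 / (1.0 + np.exp(-S[u] @ T[v]))  # sigmoid of source-target inner product
```

Because the decoder pairs the source embedding of u with the target embedding of v, score(u, v) generally differs from score(v, u), an asymmetry that a single shared embedding with a symmetric decoder cannot express.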
Abstract: Polynomial filters, a kind of Graph Neural Network, typically use a predetermined polynomial basis and learn the coefficients from the training data. It has been observed that the effectiveness of the model is highly dependent on the properties of the polynomial basis. Consequently, two natural and fundamental questions arise: Can we learn a suitable polynomial basis from the training data? Can we determine the optimal polynomial basis for a given graph and node features? In this paper, we propose two spectral GNN models that provide positive answers to these questions. First, inspired by Favard's Theorem, we propose the FavardGNN model, which learns a polynomial basis from the space of all possible orthonormal bases. Second, we examine the supposedly unsolvable definition of the optimal polynomial basis from Wang & Zhang (2022) and propose a simple model, OptBasisGNN, which computes the optimal basis for a given graph structure and graph signal. Extensive experiments demonstrate the effectiveness of both proposed models.
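As a worked illustration of the Favard construction, the sketch below (plain NumPy, one feature channel, illustrative parameter names; not the authors' code) applies a polynomial filter whose basis is defined by learnable three-term recurrence coefficients. Favard's Theorem guarantees that for real gammas and positive betas the recurrence x p_k(x) = betas[k+1] p_{k+1}(x) + gammas[k] p_k(x) + betas[k] p_{k-1}(x) generates polynomials orthonormal with respect to some measure, so optimizing these scalars amounts to searching the space of orthonormal bases.

```python
import numpy as np

def favard_filter(L, x, gammas, betas, alphas):
    """Apply a learned-basis polynomial filter to a graph signal.

    L      : (n, n) propagation matrix (e.g., normalized Laplacian)
    x      : (n,) one channel of the graph signal
    gammas : (K,) recurrence shifts   -- learnable in the model
    betas  : (K+1,) positive scales   -- learnable in the model
    alphas : (K+1,) filter weights    -- learnable in the model
    """
    p_prev = np.zeros_like(x)
    p_curr = x / betas[0]                  # basis polynomial p_0 applied to the signal
    out = alphas[0] * p_curr
    for k in range(len(gammas)):           # build p_{k+1} from p_k and p_{k-1}
        p_next = (L @ p_curr - gammas[k] * p_curr - betas[k] * p_prev) / betas[k + 1]
        out = out + alphas[k + 1] * p_next
        p_prev, p_curr = p_curr, p_next
    return out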
Abstract: Graph Convolutional Networks (GCNs), which use a message-passing paradigm with stacked convolution layers, are foundational methods for learning graph representations. Recent GCN models use various residual connection techniques to alleviate model degradation problems such as over-smoothing and vanishing gradients. However, existing residual connection techniques fail to make extensive use of the underlying graph structure in the graph spectral domain, which is critical for obtaining satisfactory results on heterophilic graphs. In this paper, we introduce ClenshawGCN, a GNN model that employs the Clenshaw Summation Algorithm to enhance the expressiveness of the GCN model. ClenshawGCN equips the standard GCN model with two straightforward residual modules: an adaptive initial residual connection and a negative second-order residual connection. We show that by adding these two residual modules, ClenshawGCN implicitly simulates a polynomial filter under the Chebyshev basis, giving it at least as much expressive power as polynomial spectral GNNs. In addition, we conduct comprehensive experiments to demonstrate the superiority of our model over both spatial and spectral GNN models.
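To show where the two residual modules come from, here is a minimal sketch (illustrative names, not the authors' implementation) of Clenshaw's summation algorithm evaluating a Chebyshev filter f(P)X = sum_k a_k T_k(P)X for a propagation matrix P. The `+ coeffs[k] * X` term plays the role of the initial residual connection (adaptive when the coefficients are learned) and the `- B_prev2` term is the negative second-order residual, which is why equipping a GCN stack with these two connections implicitly evaluates a Chebyshev polynomial filter.

```python
import numpy as np

def clenshaw_cheby_filter(P, X, coeffs):
    """Evaluate f(P) X = sum_{k=0}^{K} coeffs[k] * T_k(P) X via Clenshaw recurrence.

    P      : (n, n) propagation matrix (e.g., symmetrically normalized adjacency)
    X      : (n, d) node features
    coeffs : sequence [a_0, ..., a_K] of Chebyshev coefficients
    """
    K = len(coeffs) - 1
    B_prev2 = np.zeros_like(X)  # B_{k+2}
    B_prev1 = np.zeros_like(X)  # B_{k+1}
    for k in range(K, 0, -1):   # k = K, ..., 1
        # negative second-order residual (-B_prev2) plus initial residual (+coeffs[k] * X)
        B_k = 2.0 * (P @ B_prev1) - B_prev2 + coeffs[k] * X
        B_prev2, B_prev1 = B_prev1, B_k
    return P @ B_prev1 - B_prev2 + coeffs[0] * X
```

For K = 2 this returns a_0 X + a_1 P X + a_2 (2 P^2 - I) X, matching T_0 = 1, T_1 = x, and T_2 = 2x^2 - 1, so the recurrence reproduces the Chebyshev expansion exactly.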