Abstract: Research on analyzing graphs with Graph Neural Networks (GNNs) has been receiving increasing attention because of the great expressive power of graphs. GNNs map the adjacency matrix and node features to node representations by passing messages along edges at each convolution layer. However, the messages passed through GNNs are not always beneficial to all parts of a graph. Specifically, since the data distribution varies over the graph, the receptive field (the farthest nodes from which a node can obtain information) needed to gather information also varies. Existing GNNs treat all parts of the graph uniformly, which makes it difficult to adaptively pass the most informative messages to each individual part. To solve this problem, we propose two regularization terms that consider message passing locally: (1) Intra-Energy Reg and (2) Inter-Energy Reg. Through experiments and theoretical discussion, we first show that the smoothing speed of different parts varies enormously and that the topology of each part affects how smoothing proceeds. With Intra-Energy Reg, we strengthen the message passing within each part, which helps gather more useful information. With Inter-Energy Reg, we improve the ability of GNNs to distinguish different nodes. With the two proposed regularization terms, GNNs can adaptively filter the most useful information, learn more robustly, and gain higher expressiveness. Moreover, the proposed LEReg can be easily applied to other GNN models in a plug-and-play manner. Extensive experiments on several benchmarks verify that GNNs with LEReg outperform or match state-of-the-art methods. The effectiveness and efficiency of LEReg are also demonstrated empirically through detailed experiments and visualizations.