Abstract: Graph neural networks (GNNs) have garnered significant attention due to their ability to represent graph data. Among various GNN variants, the graph attention network (GAT) stands out because it dynamically learns the importance of different nodes. However, existing GATs derive attention coefficients mainly from smoothed node features rather than from graph structural information, and thus fail to provide crucial contextual cues for node representations. To address this issue, this study proposes a neighbor overlay-induced graph attention network (NO-GAT) built on two ideas: a) learning favorable structural information, i.e., overlaid neighbors, from the adjacency matrix, outside the node feature propagation process; b) injecting the overlaid-neighbor information into the node feature propagation process to jointly compute the attention coefficients. Empirical studies on graph benchmark datasets indicate that the proposed NO-GAT consistently outperforms state-of-the-art models.
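As a rough illustration of the second idea, the sketch below shows one way a neighbor-overlay term could enter a single GAT-style attention head: common-neighbor counts computed directly from the adjacency matrix are added to the feature-based attention logits before the softmax. This is a minimal sketch under stated assumptions, not the paper's implementation; the layer name, the learnable `beta` weight, and the normalization of the overlay term are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NOGATLayerSketch(nn.Module):
    """Single attention head; `A` is a dense {0,1} adjacency matrix of shape (N, N)
    with self-loops included. Hypothetical sketch, not the authors' code."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Linear(out_dim, 1, bias=False)
        self.a_dst = nn.Linear(out_dim, 1, bias=False)
        self.beta = nn.Parameter(torch.tensor(1.0))  # weight of the structural term (assumption)

    def forward(self, x, A):
        h = self.W(x)                                  # (N, out_dim) transformed node features
        # Standard GAT-style attention logits from node features
        e = self.a_src(h) + self.a_dst(h).T            # (N, N) via broadcasting
        e = F.leaky_relu(e, negative_slope=0.2)
        # Structural term: overlaid (shared) neighbor counts per node pair,
        # computed from the adjacency matrix outside feature propagation
        overlap = A @ A                                # (N, N) common-neighbor counts
        overlap = overlap / (overlap.max() + 1e-8)     # simple normalization (assumption)
        e = e + self.beta * overlap                    # inject structure into the logits
        # Restrict attention to existing edges
        e = e.masked_fill(A == 0, float('-inf'))
        alpha = torch.softmax(e, dim=1)                # attention coefficients
        return alpha @ h                               # aggregated node representations
```

In this sketch the overlay term biases attention toward neighbors that share many common neighbors with the target node, which is one plausible reading of "injecting overlaid-neighbor information" into the coefficient computation.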