Abstract: Learning-based precoding has been shown to enable real-time implementation, joint optimization with channel acquisition, and robustness to imperfect channels. Yet previous works rarely explain their design choices or the resulting learning performance, and existing methods either suffer from high training complexity or depend on problem-specific models. In this paper, we address these issues by analyzing the properties of the precoding policy and the inductive biases of neural networks. We note that the learning performance can be decomposed into approximation and estimation errors, where the former is related to the smoothness of the policy and both depend on the inductive biases of the neural networks. To this end, we introduce a graph neural network (GNN) to learn the precoding policy and analyze its connection with the commonly used convolutional neural networks (CNNs). Taking a sum-rate maximization precoding policy as an example, we explain why the learned policy performs well in the low signal-to-noise ratio regime, in spatially uncorrelated channels, and when the number of users is much smaller than the number of antennas, as well as why the GNN achieves higher learning efficiency than CNNs. Extensive simulations validate our analyses and evaluate the generalization ability of the GNN.
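As a concrete illustration of the sum-rate maximization precoding problem referred to in the abstract, consider a standard multi-user MISO downlink formulation; the notation below ($\mathbf{h}_k$, $\mathbf{v}_k$, $N$, $K$, $P$, $\sigma^2$) is introduced here for illustration and may differ from the symbols used later in the paper:
\begin{equation}
\max_{\{\mathbf{v}_k\}} \ \sum_{k=1}^{K} \log_2\!\left(1 + \frac{|\mathbf{h}_k^{\mathsf{H}} \mathbf{v}_k|^2}{\sum_{j \neq k} |\mathbf{h}_k^{\mathsf{H}} \mathbf{v}_j|^2 + \sigma^2}\right) \quad \text{s.t.} \quad \sum_{k=1}^{K} \|\mathbf{v}_k\|^2 \le P,
\end{equation}
where $\mathbf{h}_k \in \mathbb{C}^{N}$ is the channel vector of user $k$, $\mathbf{v}_k \in \mathbb{C}^{N}$ is its precoding vector, $N$ is the number of transmit antennas, $K$ is the number of users, $\sigma^2$ is the noise power, and $P$ is the total transmit power budget. The precoding policy discussed in the abstract is the mapping from the channel matrix $[\mathbf{h}_1, \ldots, \mathbf{h}_K]$ to the precoding matrix $[\mathbf{v}_1, \ldots, \mathbf{v}_K]$, which the GNN is trained to learn.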