Machine learning (ML) techniques, especially neural networks (NNs), have shown promise in learning subgrid-scale (SGS) parameterizations for climate modeling. However, a major problem with data-driven parameterizations, particularly those learned with supervised algorithms, is instability when they are integrated with numerical solvers of large-scale processes. Current remedies are often ad hoc and lack a theoretical foundation. Here, we combine ML theory and climate physics to address a source of instability in NN-based parameterizations. We demonstrate the importance of learning spatially non-local dynamics, using a 1D model of the quasi-biennial oscillation (QBO) with gravity wave (GW) parameterization as a testbed. While common offline metrics fail to identify shortcomings in learning non-local dynamics, we show that the receptive field (RF), the region of the input an NN uses to predict an output, can identify instability a priori. We find that NN-based parameterizations that seem to accurately predict GW forcings from wind profiles ($\mathbf{R^2 \approx 0.99}$) cause unstable simulations when the RF is too small to capture the non-local dynamics, while NNs of the same size but with a large enough RF remain stable. Some architectures, e.g., Fourier neural operators, have inherently large RFs. We also demonstrate that learning non-local dynamics can be crucial for the stability and accuracy of a data-driven spatiotemporal emulator of the entire zonal wind field. Given the ubiquity of non-local dynamics in the climate system, we expect the use of the effective RF, which can be computed for any NN architecture, to be important for many applications. This work highlights the need to integrate ML theory with physics in designing and analyzing data-driven algorithms for weather and climate modeling.
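To make the effective-RF idea concrete, one common way to estimate it for an arbitrary architecture is to backpropagate from a single output location and measure which input locations receive non-negligible gradient. The minimal sketch below (our illustration, not the paper's code) does this for a toy 1D convolutional NN standing in for a column-wise parameterization; the model, column length, and threshold `tol` are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy 1D CNN mapping a wind profile u(z) (a 64-level column) to a
# forcing profile. Architecture and sizes are illustrative only.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=5, padding=2),
)

def effective_rf(model, n_levels=64, out_idx=None, tol=1e-8):
    """Estimate the effective RF at one output location by
    backpropagating from it and finding the span of input levels
    with non-negligible gradient magnitude."""
    x = torch.randn(1, 1, n_levels, requires_grad=True)
    y = model(x)
    out_idx = n_levels // 2 if out_idx is None else out_idx
    y[0, 0, out_idx].backward()
    g = x.grad[0, 0].abs()
    active = torch.nonzero(g > tol * g.max()).squeeze(-1)
    return int(active.min()), int(active.max())

lo, hi = effective_rf(model)
print(f"effective RF spans input levels {lo}..{hi} "
      f"({hi - lo + 1} of 64 levels)")
```

Comparing the resulting span to the vertical extent of the true non-local dependence (here, how far GW forcing at one level depends on winds at other levels) gives the a priori stability check described above: an RF narrower than the physical non-locality flags potential online instability even when offline $R^2$ is high.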