Deployment of machine learning models in high-stakes settings (e.g., healthcare) often depends not only on a model's accuracy but also on its fairness, robustness, and interpretability. Generalized Additive Models (GAMs) have a long history of use in these high-stakes domains, but they lack desirable features of deep learning such as differentiability and scalability. In this work, we propose a neural GAM (NODE-GAM) and a neural GA$^2$M (NODE-GA$^2$M) that scale well to large datasets while remaining interpretable and accurate. We show that our proposed models achieve accuracy comparable to non-interpretable models and outperform other GAMs on large datasets. We also show that our models are more accurate in the self-supervised learning setting when access to labeled data is limited.
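For context, the standard forms of a GAM and a GA$^2$M are sketched below; the notation ($g$ as the link function, $f_j$ and $f_{jk}$ as learned shape functions) is ours and is not fixed by this abstract.
\[
g\big(\mathbb{E}[y \mid x]\big) = \beta_0 + \sum_{j} f_j(x_j) \qquad \text{(GAM)}
\]
\[
g\big(\mathbb{E}[y \mid x]\big) = \beta_0 + \sum_{j} f_j(x_j) + \sum_{j<k} f_{jk}(x_j, x_k) \qquad \text{(GA$^2$M)}
\]
Because each feature (or feature pair) contributes through its own shape function, the model's predictions can be inspected term by term, which is the source of the interpretability the abstract refers to.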