Recent machine learning studies have focused on methods that exploit the symmetries implicit in a specific manifold as an inductive bias. In particular, approaches based on Grassmann manifolds have proven effective in fields such as point cloud and image set analysis. However, there has been little research on general learning models for distributions on the Grassmann manifold. In this paper, we lay the theoretical foundations for learning distributions on the Grassmann manifold via continuous normalizing flows. Experimental results show that the proposed method generates high-quality samples by capturing the structure of the data. Furthermore, the proposed method significantly outperforms state-of-the-art methods in terms of log-likelihood or evidence lower bound. We expect these results to spur further research in this area.
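
As background for the abstract's central object, the following is a minimal sketch of the continuous normalizing flow formulation on a Riemannian manifold; the learned vector field $f_\theta$ and the use of the Riemannian divergence are standard ingredients of Riemannian CNFs, and are stated here as context rather than as this paper's exact construction on the Grassmann manifold.

\begin{align}
  \frac{\mathrm{d}z(t)}{\mathrm{d}t} &= f_\theta\bigl(z(t), t\bigr), \qquad z(0) \sim p_0, \\
  \frac{\mathrm{d}\log p_t\bigl(z(t)\bigr)}{\mathrm{d}t} &= -\operatorname{div}\, f_\theta\bigl(z(t), t\bigr),
\end{align}

where $z(t)$ evolves on the manifold, $f_\theta(\cdot, t)$ is a learned tangent vector field, $p_0$ is a tractable base density, and $\operatorname{div}$ is the Riemannian divergence. Integrating both equations from $t = 0$ to $t = T$ yields a sample together with its exact log-likelihood, which is what enables the log-likelihood comparisons reported above.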