In this paper, we propose a new and challenging task named \textbf{partial multi-view few-shot learning}, which unifies two tasks, i.e., few-shot learning and partial multi-view learning. Different from traditional few-shot learning, this task aims to solve the few-shot learning problem given incomplete multi-view prior knowledge, which better matches real-world applications. However, this setting introduces two difficulties. First, the gaps among different views can be large and hard to reduce, especially under sample scarcity. Second, due to the incomplete view information, few-shot learning becomes more challenging than in the traditional setting. To deal with these issues, we propose a new \textbf{Meta-alignment and Context Gated-aggregation Network}, which equips partial multi-view GNNs with meta-alignment and context gated-aggregation. Specifically, the meta-alignment effectively maps the features from different views into a more compact latent space, thereby reducing the view gaps. Moreover, the context gated-aggregation alleviates the influence of missing views by leveraging the cross-view context. Extensive experiments are conducted on the PIE and ORL datasets to evaluate the proposed method. Compared with other few-shot learning methods, our method achieves state-of-the-art performance, especially when views are heavily missing.
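To make the gated-aggregation idea concrete, the following is a minimal sketch, not the paper's exact design: it assumes each sample has a per-view feature matrix with a boolean availability mask, uses the mean of the available views as the cross-view context, and gates each available view with a hypothetical sigmoid gate (\texttt{gate\_weights}, \texttt{gate\_bias}) before fusing them.

\begin{verbatim}
# Illustrative sketch only: simplified gated aggregation over partially
# observed view embeddings. The gate form and all names here are
# assumptions for exposition, not the authors' exact method.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregate_views(view_feats, view_mask, gate_weights, gate_bias):
    """Gate and fuse the available views of one sample.

    view_feats   : (n_views, d) array; rows of missing views are ignored.
    view_mask    : (n_views,) boolean array, True where the view is observed.
    gate_weights : (2 * d,) array, gate_bias: scalar (hypothetical gate params).
    """
    available = view_feats[view_mask]                 # (n_avail, d)
    context = available.mean(axis=0)                  # cross-view context vector
    # Score each available view against the shared cross-view context.
    gates = np.array([
        sigmoid(gate_weights @ np.concatenate([v, context]) + gate_bias)
        for v in available
    ])
    gates = gates / gates.sum()                       # normalize gate scores
    return (gates[:, None] * available).sum(axis=0)   # gated sum -> (d,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_views = 16, 3
    feats = rng.normal(size=(n_views, d))
    mask = np.array([True, False, True])              # second view is missing
    w, b = rng.normal(size=2 * d), 0.0
    fused = aggregate_views(feats, mask, w, b)
    print(fused.shape)                                # (16,)
\end{verbatim}

In this sketch the fused representation depends only on the observed views, so a missing view changes the gating context rather than corrupting the output, which is the intuition behind alleviating the view-missing influence.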