Subgraph recognition aims to discover a compressed substructure of a graph that is most informative about the graph property. It can be formulated as optimizing the Graph Information Bottleneck (GIB) objective with a mutual information estimator. However, GIB suffers from training instability, since the mutual information of graph data is intrinsically difficult to estimate. This paper introduces a noise injection method to compress the information in subgraphs, leading to a novel Variational Graph Information Bottleneck (VGIB) framework. Under mild assumptions, the VGIB objective admits a tractable variational approximation. As a result, VGIB enjoys a more stable and efficient training process: in practice, we find that VGIB converges 10 times faster than GIB while achieving better performance. Extensive experiments on graph interpretation, explainability of Graph Neural Networks, and graph classification show that VGIB finds better subgraphs than existing methods.
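To make the noise injection idea concrete, the following is a minimal NumPy sketch of one plausible instantiation: each node's features are replaced, with a per-node probability, by Gaussian noise matched to the batch statistics, so that uninformative nodes can be compressed away. The function name `inject_noise` and this exact parameterization are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def inject_noise(x, p, rng=None):
    """Compress node information by stochastic noise injection (sketch).

    x:   (n_nodes, n_feats) node feature matrix.
    p:   (n_nodes,) probability that each node's features are replaced
         by Gaussian noise drawn from the batch feature statistics.
    Returns the noised feature matrix of the same shape.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Match the injected noise to the empirical feature distribution.
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    noise = rng.normal(mu, sigma, size=x.shape)
    # Per-node Bernoulli mask: True means "replace with noise".
    mask = rng.random(x.shape[0])[:, None] < p[:, None]
    return np.where(mask, noise, x)
```

In the actual framework, the per-node probabilities would be produced by a learnable network and trained end-to-end, so that the model learns which nodes to noise out; here they are fixed inputs for illustration.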