In this paper, we focus on reducing the amount of labeled data required for sentence learning. We argue that real-time uncertainty sampling in active learning is time-consuming, while delayed uncertainty sampling may lead to ineffective sample selection. We propose adversarial uncertainty sampling in discrete space, in which sentences are mapped into the encoding space of a popular pre-trained language model. Our approach works in real time and is more efficient than traditional uncertainty sampling. Experimental results on five datasets show that it outperforms strong baselines, achieving better uncertainty sampling effectiveness with acceptable running time.