Neural networks achieve ever higher accuracy at the expense of greater energy and computational cost. Quantization can greatly reduce this cost, yielding models that are more hardware-friendly with acceptable accuracy loss. On the other hand, recent research has found that neural networks are vulnerable to adversarial attacks, and a model's robustness can be improved only through defense methods such as adversarial training. In this work, we find that adversarially-trained neural networks are more vulnerable to quantization loss than plain models. To minimize both the adversarial and the quantization losses simultaneously, and thereby make the quantized model robust, we propose a layer-wise adversarial-aware quantization method that uses the Lipschitz constant to choose the best quantization parameter settings for a neural network. We theoretically derive both losses and prove the consistency of our metric selection. Experimental results show that our method effectively and efficiently improves the robustness of quantized adversarially-trained neural networks.
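To make the core idea concrete, the sketch below illustrates one plausible reading of Lipschitz-guided, layer-wise quantization parameter selection. Everything here is an assumption for illustration, not the paper's actual algorithm: the power-iteration estimate of each layer's Lipschitz constant (its spectral norm), the candidate bit-widths, and the greedy rule `choose_bitwidths` that gives more bits to layers whose larger Lipschitz constants amplify quantization noise more.

```python
# Illustrative sketch only: per-layer Lipschitz estimates guiding bit-width
# choice. Layer shapes, candidate bit-widths, and the greedy rule are
# assumptions for illustration, not the paper's exact method.
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W (the Lipschitz constant
    of the linear map x -> Wx) by power iteration."""
    v = np.random.randn(W.shape[1])
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

def choose_bitwidths(weights, budget_bits, candidates=(2, 4, 8)):
    """Greedy layer-wise selection: layers with larger Lipschitz constants
    amplify quantization noise more, so upgrade their precision first."""
    lips = [spectral_norm(W) for W in weights]
    order = np.argsort(lips)[::-1]            # most sensitive layers first
    bits = {i: min(candidates) for i in range(len(weights))}
    remaining = budget_bits - sum(bits.values())
    for i in order:                           # spend the bit budget greedily
        upgrade = max(candidates) - bits[i]
        if upgrade <= remaining:
            bits[int(i)] = max(candidates)
            remaining -= upgrade
    return lips, bits

# Toy usage: three random "layers" with different weight scales.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 64)) * s for s in (0.5, 1.0, 2.0)]
lips, bits = choose_bitwidths(weights, budget_bits=14)
print("Lipschitz estimates:", [round(l, 2) for l in lips])
print("Assigned bit-widths:", bits)
```

The greedy allocation is just one simple instantiation of the idea; the paper's actual selection criterion is derived from its theoretical analysis of the combined adversarial and quantization losses.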