Unsupervised domain adaptation (UDA) is widely used to transfer a model trained on a labeled source domain to an unlabeled target domain. However, although extensive studies have shown that deep learning models are vulnerable to adversarial attacks, the adversarial robustness of models in domain adaptation applications has largely been overlooked. In this paper, we first conduct an empirical analysis showing that severe inter-class mismatch is the key barrier to achieving a robust model with UDA. We then propose a novel approach, Class-consistent Unsupervised Robust Domain Adaptation (CURDA), for robust unsupervised domain adaptation. With the introduced contrastive robust training and source-anchored adversarial contrastive loss, CURDA effectively overcomes the challenge of inter-class mismatch. Experiments on two public benchmarks show that, compared with vanilla UDA, CURDA can improve model robustness in target domains by up to 67.4% at a cost of only 0% to 4.4% accuracy on clean data samples. This is one of the first works focusing on the new problem of robustifying unsupervised domain adaptation, demonstrating that UDA models can be substantially robustified while maintaining competitive accuracy.
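The abstract does not specify the exact form of the source-anchored adversarial contrastive loss; as a rough illustration only, the following minimal PyTorch sketch shows one plausible instantiation, where adversarial target features are pulled toward the mean source feature (anchor) of their pseudo-class and pushed away from the other class anchors via an InfoNCE-style objective. All function names, the InfoNCE form, and the temperature value are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def source_anchored_contrastive_loss(adv_feats, pseudo_labels,
                                     src_feats, src_labels,
                                     num_classes, temperature=0.1):
    """Hypothetical sketch of a source-anchored adversarial contrastive loss.

    adv_feats:     (B, d) features of adversarially perturbed target samples
    pseudo_labels: (B,)   pseudo-labels assigned to the target samples
    src_feats:     (N, d) features of labeled source samples
    src_labels:    (N,)   ground-truth source labels
    Assumes every class in [0, num_classes) appears in src_labels.
    """
    # Class anchors: mean of L2-normalized source features per class.
    src_feats = F.normalize(src_feats, dim=1)
    anchors = torch.stack([src_feats[src_labels == c].mean(dim=0)
                           for c in range(num_classes)])
    anchors = F.normalize(anchors, dim=1)            # (C, d)

    adv_feats = F.normalize(adv_feats, dim=1)        # (B, d)
    logits = adv_feats @ anchors.t() / temperature   # (B, C) scaled cosine sims

    # InfoNCE over class anchors: the pseudo-label's anchor is the positive,
    # all other anchors act as negatives.
    return F.cross_entropy(logits, pseudo_labels)

# Toy usage with random tensors (shapes only, no real training loop):
if __name__ == "__main__":
    B, N, d, C = 8, 32, 16, 4
    loss = source_anchored_contrastive_loss(
        adv_feats=torch.randn(B, d),
        pseudo_labels=torch.randint(0, C, (B,)),
        src_feats=torch.randn(N, d),
        src_labels=torch.arange(N) % C,
        num_classes=C,
    )
    print(loss.item())
```

In this reading, anchoring the contrastive positives in the labeled source domain keeps adversarial target features aligned with their (pseudo-)class rather than drifting toward a wrong class, which is one way such a loss could counter the inter-class mismatch the abstract identifies.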