Semantic segmentation of high-resolution remote sensing imagery (HRSI) suffers from domain shift, which degrades model performance on unseen domains. Unsupervised domain adaptation (UDA) for semantic segmentation aims to adapt a segmentation model trained on a labeled source domain to an unlabeled target domain. However, existing UDA semantic segmentation models tend to align pixels or features based on statistical correlations with labels in the source and target domains and to make predictions accordingly, which leads to uncertain and fragile predictions. In this paper, we propose a causal prototype-inspired contrast adaptation (CPCA) method to explore the invariant causal mechanisms between different HRSI domains and their semantic labels. CPCA first disentangles causal features and bias features from source- and target-domain images through a causal feature disentanglement module. A causal prototypical contrast module is then used to learn domain-invariant causal features. To further decorrelate causal and bias features, a causal intervention module intervenes on the bias features to generate counterfactual unbiased samples. By forcing the causal features to satisfy the principles of separability, invariance, and intervention, CPCA approximates the causal factors of the source and target domains and makes decisions on the target domain based on the causal features alone, which improves generalization. Extensive experiments on three cross-domain tasks show that CPCA substantially outperforms state-of-the-art methods.
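To make the three-stage pipeline concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch: a disentanglement head splitting backbone features into causal and bias parts, an InfoNCE-style contrast of causal features against class prototypes, and an intervention that recombines causal features with bias features drawn from other samples. All module names, the additive recombination, and the specific losses are assumptions for illustration; the paper's actual architecture and objectives are not specified in the abstract.

```python
# Minimal sketch of a CPCA-style pipeline (hypothetical names and shapes;
# not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalDisentangler(nn.Module):
    """Splits backbone features into causal and bias components (illustrative)."""

    def __init__(self, in_dim=256, feat_dim=128):
        super().__init__()
        self.causal_head = nn.Conv2d(in_dim, feat_dim, kernel_size=1)
        self.bias_head = nn.Conv2d(in_dim, feat_dim, kernel_size=1)

    def forward(self, feats):
        return self.causal_head(feats), self.bias_head(feats)


def prototypical_contrast_loss(causal_feats, labels, prototypes, tau=0.1):
    """InfoNCE-style contrast of pixel features against class prototypes.

    causal_feats: (N, C, H, W); labels: (N, H, W) with class ids;
    prototypes: (num_classes, C), e.g. running means of source causal features.
    """
    n, c, h, w = causal_feats.shape
    feats = F.normalize(causal_feats.permute(0, 2, 3, 1).reshape(-1, c), dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.t() / tau                      # (N*H*W, num_classes)
    return F.cross_entropy(logits, labels.reshape(-1), ignore_index=255)


def counterfactual_mix(causal_feats, bias_feats):
    """Intervene on the bias factor by pairing each causal feature with a
    bias feature taken from another randomly chosen sample in the batch."""
    perm = torch.randperm(bias_feats.size(0))
    return causal_feats + bias_feats[perm]                 # simple additive recombination


if __name__ == "__main__":
    # Toy usage with a stand-in for encoder output.
    num_classes, feat_dim = 6, 128
    backbone_feats = torch.randn(2, 256, 32, 32)
    labels = torch.randint(0, num_classes, (2, 32, 32))
    prototypes = torch.randn(num_classes, feat_dim)

    disentangler = CausalDisentangler(256, feat_dim)
    classifier = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    causal, bias = disentangler(backbone_feats)
    loss_contrast = prototypical_contrast_loss(causal, labels, prototypes)

    # Predictions on counterfactual samples should match predictions from the
    # causal features alone, encouraging decisions to ignore the bias factor.
    logits_causal = classifier(causal)
    logits_cf = classifier(counterfactual_mix(causal, bias))
    loss_consistency = F.kl_div(
        F.log_softmax(logits_cf, dim=1), F.softmax(logits_causal, dim=1),
        reduction="batchmean",
    )
    total_loss = loss_contrast + loss_consistency
    print(float(total_loss))
```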