Detecting out-of-distribution (OOD) inputs is a principal task for safely deploying deep-neural-network classifiers in open-world scenarios. OOD samples can be drawn from arbitrary distributions and deviate from in-distribution (ID) data along various dimensions, such as foreground semantic features (e.g., vehicle images vs. ID samples in fruit classification) and background domain features (e.g., textural images vs. ID samples in object recognition). Existing methods detect OOD samples based on semantic features while neglecting other dimensions such as domain features. This paper highlights the importance of domain features in OOD detection and proposes leveraging them to enhance semantic-feature-based OOD detection methods. To this end, we propose a novel generic framework that learns domain features from the ID training samples via a dense prediction approach; different existing semantic-feature-based OOD detection methods can be seamlessly combined with it to jointly learn in-distribution features along both the semantic and domain dimensions. Extensive experiments show that our approach 1) substantially improves the performance of four state-of-the-art (SotA) OOD detection methods on multiple widely used OOD datasets with diverse domain features, and 2) achieves new SotA performance on these benchmarks.
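To make the joint-learning idea concrete, below is a minimal PyTorch sketch of one plausible instantiation: a shared encoder feeds both a classification head (semantic features) and a dense-prediction head (domain features), trained with a combined loss. This is a sketch under stated assumptions, not the paper's exact design: all module names are hypothetical, the reconstruction-style dense target merely stands in for the framework's actual dense-prediction objective, and the weighting factor `lam` is illustrative.

```python
import torch
import torch.nn as nn

class JointOODModel(nn.Module):
    """Hypothetical sketch: a shared backbone with (i) a semantic
    classification head and (ii) a dense-prediction head intended to
    capture background/domain features."""

    def __init__(self, num_classes: int, feat_dim: int = 256):
        super().__init__()
        # Shared convolutional encoder over input images (H, W divisible by 4).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Semantic branch: global average pooling + linear classifier.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Domain branch: dense (per-pixel) prediction decoder.
        self.dense_head = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        logits = self.classifier(h.mean(dim=(2, 3)))  # semantic features
        dense_out = self.dense_head(h)                # domain features
        return logits, dense_out


def joint_loss(logits, dense_out, labels, images, lam: float = 0.5):
    """Joint objective: cross-entropy on the semantic branch plus a
    per-pixel loss on the dense branch (reconstruction used here as a
    stand-in for the paper's dense-prediction target)."""
    sem = nn.functional.cross_entropy(logits, labels)
    dom = nn.functional.mse_loss(dense_out, images)
    return sem + lam * dom


# Usage example on dummy data.
model = JointOODModel(num_classes=10)
imgs = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
logits, dense = model(imgs)
loss = joint_loss(logits, dense, labels, imgs)
loss.backward()
```

Because both heads share one encoder trained only on ID data, an OOD score computed from this model can, in principle, react to deviations in either the semantic or the domain dimension; how the two branches are actually scored and combined with existing semantic-feature-based detectors is specified in the paper itself.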