This paper describes a multimodal vision sensor that integrates three types of cameras: a stereo camera, a polarization camera, and a panoramic camera. Each camera provides a distinct type of information: the stereo camera measures per-pixel depth, the polarization camera measures the degree of polarization, and the panoramic camera captures a 360-degree view of the surroundings. Data fusion and advanced environment perception can be built on the combination of these sensors. Designed especially for autonomous driving, the vision sensor is paired with a robust semantic segmentation network. In addition, we demonstrate how cross-modal enhancement can be achieved by registering the color image and the polarization image, and give an example of water hazard detection. To verify the multimodal vision sensor's compatibility with different computing devices, a brief runtime performance analysis is carried out.