Robust and reliable ego-motion estimation is a key component of most autonomous mobile systems. Many odometry estimation methods have been developed using different sensors such as cameras or LiDARs. In this work, we present a resilient approach that exploits the redundancy of multiple odometry algorithms using a 3D LiDAR scanner and a monocular camera to provide reliable state estimation for autonomous vehicles. Our system runs a stack of odometry algorithms in parallel. From these, it selects the most promising pose estimate based on sanity checks derived from the dynamic and kinematic constraints of the vehicle, as well as an alignment score computed between the current LiDAR scan and a locally built point cloud map. In this way, our method can exploit the advantages of different existing ego-motion estimation approaches. We evaluate our method on the KITTI Odometry dataset. The experimental results suggest that our approach is resilient to failures of individual methods and achieves an overall better performance than the individual odometry methods employed by our system.
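
To make the selection step concrete, the following is a minimal Python sketch of how such a candidate selection could look, assuming candidate poses are given as 4x4 homogeneous transforms and the local map as an Nx3 point array. All function names, thresholds, and the mean nearest-neighbor scoring metric are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the selection step described above: several odometry
# pipelines each propose a pose; candidates violating assumed kinematic limits
# are discarded, and the survivors are ranked by how well the current scan
# aligns with a local point cloud map.
import numpy as np
from scipy.spatial import cKDTree

MAX_SPEED = 30.0      # m/s, assumed translational limit
MAX_YAW_RATE = 1.0    # rad/s, assumed rotational limit

def passes_sanity_checks(delta_pose: np.ndarray, dt: float) -> bool:
    """Reject pose increments that imply physically implausible motion."""
    speed = np.linalg.norm(delta_pose[:3, 3]) / dt
    # Rotation angle recovered from the trace of the 3x3 rotation block.
    cos_angle = np.clip((np.trace(delta_pose[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_rate = np.arccos(cos_angle) / dt
    return speed <= MAX_SPEED and rot_rate <= MAX_YAW_RATE

def scan_to_map_score(scan: np.ndarray, pose: np.ndarray,
                      map_tree: cKDTree) -> float:
    """Mean nearest-neighbor distance of the transformed scan to the map."""
    scan_world = scan @ pose[:3, :3].T + pose[:3, 3]
    dists, _ = map_tree.query(scan_world)
    return float(np.mean(dists))

def select_pose(candidates, scan, local_map, prev_pose, dt):
    """Pick the sane candidate with the best scan-to-map alignment score."""
    map_tree = cKDTree(local_map)
    best_pose, best_score = None, np.inf
    for pose in candidates:
        delta = np.linalg.inv(prev_pose) @ pose
        if not passes_sanity_checks(delta, dt):
            continue  # dynamically or kinematically implausible
        score = scan_to_map_score(scan, pose, map_tree)
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose  # None if every candidate failed the sanity checks
```

The scan-to-map score here is a simple mean nearest-neighbor distance; an ICP fitness or similar registration measure would fit the same check-then-score pattern the abstract describes.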