Classical Visual Simultaneous Localization and Mapping (VSLAM) algorithms can easily fail when either the robot's motion or the environment is too challenging. Recently, enhancing VSLAM algorithms with Deep Neural Networks, an approach we refer to as hybrid methods, has achieved promising results. In this paper, we compare the performance of hybrid monocular VSLAM methods that use different learned feature descriptors. To this end, we propose a set of experiments that evaluate the robustness of the algorithms under varying environments, camera motion, and camera sensor noise. Experiments conducted on the KITTI and EuRoC MAV datasets confirm that learned feature descriptors can produce more robust VSLAM systems.