Recent applications of deep learning to navigation have generated end-to-end navigation solutions whereby visual sensor input is mapped to control signals or to motion primitives. The resulting visual navigation strategies work well at collision avoidance and match the performance of traditional reactive navigation algorithms while operating in real time. It is accepted that these solutions cannot provide the same level of performance as a global planner. However, it is less clear how such end-to-end systems should be integrated into a full navigation pipeline. We evaluate a typical end-to-end solution within a full navigation pipeline in order to expose its weaknesses. Doing so illuminates how deep learning methods can be better integrated into the pipeline; in particular, we show that they are an efficient means to provide informed samples for sample-based planners. Controlled simulations with comparisons against traditional planners show that the number of samples can be reduced by an order of magnitude while preserving navigation performance. Implementation on a mobile robot matches the simulated performance outcomes.