Imitation learning for end-to-end autonomous driving has drawn increasing attention from the academic community. Current methods either use only images as input, which is ambiguous when the vehicle approaches an intersection, or rely on additional command input to navigate the vehicle, which is not sufficiently automated. Focusing on driving the vehicle along a given path, we propose a new navigation command that requires no human participation, together with a novel model architecture called the angle branched network. Both the new navigation command and the angle branched network are easy to understand and effective. In addition, we find that depth information, as well as segmentation information, boosts the performance of the driving model. We conduct experiments in a 3D urban simulator, and both qualitative and quantitative evaluation results demonstrate the effectiveness of our model.
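
To make the idea of a branched imitation-learning policy concrete, below is a minimal sketch of what an angle-conditioned branched network might look like, assuming the angle-derived navigation command is discretized into a small number of bins and used to select one of several control branches. The backbone, branch count, layer sizes, and module names here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions labeled): an angle-branched imitation network where a
# discretized angle command selects which control branch produces the final output.
# The CNN backbone, number of branches, and layer sizes are placeholders.
import torch
import torch.nn as nn


class AngleBranchedNet(nn.Module):
    def __init__(self, num_branches: int = 4, feat_dim: int = 512):
        super().__init__()
        # Perception backbone: RGB image -> feature vector (illustrative CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # One control head (steer, throttle, brake) per discretized angle bin.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 3))
            for _ in range(num_branches)
        )

    def forward(self, image: torch.Tensor, branch_idx: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); branch_idx: (B,) long tensor of angle-bin indices.
        feat = self.backbone(image)
        # Evaluate all branches, then keep only the one selected by the angle command.
        outs = torch.stack([b(feat) for b in self.branches], dim=1)   # (B, K, 3)
        idx = branch_idx.view(-1, 1, 1).expand(-1, 1, outs.size(-1))  # (B, 1, 3)
        return outs.gather(1, idx).squeeze(1)                         # (B, 3)


# Usage example: a batch of two images routed through branches 0 and 2.
model = AngleBranchedNet()
controls = model(torch.randn(2, 3, 88, 200), torch.tensor([0, 2]))
print(controls.shape)  # torch.Size([2, 3])
```

The design choice sketched here mirrors branched conditional imitation architectures in general: a shared perception module computes features once, and the navigation signal only gates which output head is trained and executed, so the command influences control without having to be fused into the image features.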