Pre-trained deep networks are commonly used to improve the accuracy and training time of neural networks. It is generally assumed that pre-training a network to optimal source-task performance best prepares it to learn an arbitrary target task. We show that this is generally not true: stopping source-task training before optimal performance is reached can produce a pre-trained network better suited to learning a new task. We present several experiments demonstrating this effect, as well as the influence of the amount of training and of the learning rate. Additionally, we show that this effect reflects a general loss of learning ability that extends even to relearning the source task.
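The experimental setup described above (pre-train on a source task, optionally stop early, then fine-tune on a target task) can be sketched as follows. This is a minimal pure-Python toy, not the paper's actual networks or datasets: a one-parameter linear model trained by gradient descent, with hypothetical helper names (`make_task`, `train`, `mse`) introduced only for illustration.

```python
import random

def make_task(true_w, n=200, seed=0):
    """Generate a toy regression task: y = true_w * x + noise."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [true_w * x + rng.gauss(0.0, 0.1) for x in xs]
    return xs, ys

def train(w, xs, ys, epochs, lr=0.1):
    """Plain full-batch gradient descent on mean squared error."""
    n = len(xs)
    for _ in range(epochs):
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Source and target tasks differ in their underlying parameter.
src = make_task(true_w=2.0, seed=1)
tgt = make_task(true_w=-1.0, seed=2)

# Pre-train to (near-)optimal source performance vs. stopping early.
w_full = train(0.0, *src, epochs=500)   # trained to convergence on source
w_early = train(0.0, *src, epochs=20)   # source training stopped early

# Fine-tune each pre-trained model on the target task
# with the same small epoch budget.
ft_full = train(w_full, *tgt, epochs=30)
ft_early = train(w_early, *tgt, epochs=30)
```

This toy only illustrates the training protocol; a convex one-parameter model does not exhibit the loss of learning ability the abstract reports, which is the point of the paper's experiments on deep networks.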