Single Image Super Resolution (SISR) is the process of mapping a low-resolution image to a high-resolution image. It has natural applications in remote sensing as a way to increase the spatial resolution of satellite imagery, suggesting possible improvements to automated target recognition in image classification and object detection. We explore the effect that different training sets have on SISR using the Super Resolution Generative Adversarial Network (SRGAN). We train five SRGANs on different land-use classes (e.g., agriculture, cities, ports) and test them on the same unseen data set. We examine the resulting qualitative and quantitative differences in SISR, binary classification, and object detection performance. We find that curated training sets containing objects from the test ontology lead to better performance on these computer vision tasks. However, Super Resolution (SR) may not benefit every problem and yields diminishing returns on data sets that are closer to being solved.