Many scientific problems require generating data by running an extensive number of experiments, and some of these tasks demand constant human intervention. We consider the problem of crack detection in steel plates. In current practice, a human inspects the thermogram produced by heating a plate and classifies it as cracked or intact. Artificial Intelligence (AI) based methods, most commonly Convolutional Neural Networks (CNNs), are increasingly used to remove the human from this loop by serving as a proxy for the detection process. The difficulty is that CNNs and other vision models are data-hungry and need large training sets before they perform well, while generating such data experimentally is far from easy: it requires careful mechanical and electronic design of the experimental setup and consumes substantial time and energy, which is prohibitive in resource-constrained scenarios. We address exactly this problem by building a synthetic data generation pipeline based on Finite Element simulations, and we apply data augmentation techniques to further increase the volume and diversity of the generated data. We demonstrate the concept by performing inference with vision models fine-tuned on this synthetic data, and we validate the results by checking whether the approach transfers to realistic experimental data. We show the conditions under which this transfer succeeds and how it can be achieved.
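To make the pipeline concrete, the sketch below shows one possible way the augmentation and fine-tuning stages could be wired together. It is an illustrative assumption, not the exact method described above: the directory of FE-simulated thermograms (`synthetic_thermograms/{cracked,intact}`), the use of PyTorch/torchvision, the specific augmentations, and the ResNet-18 backbone are all hypothetical choices.

```python
# Hypothetical sketch: augmenting FE-simulated thermograms and fine-tuning a
# pretrained CNN for binary crack / no-crack classification.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Augmentations that increase the volume and diversity of the synthetic
# thermograms: flips, small rotations, and brightness/contrast jitter stand in
# for variations in plate orientation and heating conditions.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed (hypothetical) directory layout: synthetic_thermograms/{cracked,intact}/*.png
train_ds = datasets.ImageFolder("synthetic_thermograms", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Fine-tune an ImageNet-pretrained backbone; ResNet-18 is an illustrative choice.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cracked vs. intact
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```

In a setup like this, whether the fine-tuned model transfers to real thermograms would hinge largely on how well the simulation and the chosen augmentations cover the noise and variability of the physical experiment.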