It is becoming increasingly common to use pre-trained models provided by third parties because of their convenience. At the same time, however, such models may be vulnerable to both poisoning and evasion attacks. We introduce an algorithmic framework that can mitigate potential security vulnerabilities in a pre-trained model when clean data from its training distribution is unavailable to the defender. The framework reverse-engineers samples from the given pre-trained model, and the resulting synthetic samples can then serve as a substitute for clean data in various defenses. We consider two important attack scenarios -- backdoor attacks and evasion attacks -- to showcase the utility of the synthesized samples. For both attacks, we show that when supplied with our synthetic data, state-of-the-art defenses perform comparably to, and sometimes even better than, when they are supplied with the same amount of clean data.
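In spirit, the reverse-engineering step can be pictured as model inversion: starting from noise, optimize an input so that the fixed pre-trained network assigns it a high score for a chosen class. The sketch below is a minimal illustration of that idea on a toy NumPy network; the toy architecture, step size, and norm constraint are our own assumptions for illustration, not the framework's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a third-party "pre-trained" classifier: a tiny bias-free
# two-layer ReLU network with fixed random weights (10 inputs, 3 classes).
# In the paper's setting these would be the actual pre-trained weights.
W1 = rng.normal(size=(32, 10))
W2 = rng.normal(size=(3, 32))

def logits(x):
    """Forward pass: class scores for input vector x."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def grad_logit(x, c):
    """Analytic gradient of logit c with respect to the input x."""
    mask = (W1 @ x > 0).astype(float)  # ReLU derivative
    return W1.T @ (mask * W2[c])

def synthesize(x0, c, steps=200, lr=0.05):
    """Reverse-engineer an input that strongly activates class c via
    projected gradient ascent on its logit, starting from noise x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * grad_logit(x, c)            # ascend the class logit
        x = x / max(1.0, np.linalg.norm(x))      # keep the sample bounded
    return x

x_init = rng.normal(scale=0.1, size=10)  # random noise seed
x_syn = synthesize(x_init, c=1)          # synthetic sample for class 1
```

Repeating this from many random seeds and across all classes would yield a pool of synthetic samples that a defense could consume in place of held-out clean data.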