There has been growing interest in applying convolutional neural networks to image-based malware classification, but the susceptibility of neural networks to adversarial examples allows malicious actors to evade such classifiers. We first clarify what constitutes an adversarial example in the malware domain. We then propose a method that obfuscates malware using patterns found in adversarial examples, such that the obfuscated malware evades classification while preserving executability and the original program logic.