In vision-based object classification systems, imaging sensors perceive the environment, and objects are then detected and classified for decision-making purposes. Vulnerabilities in the perception domain enable an attacker to inject false data into the sensor, which could lead to unsafe consequences. In this work, we focus on camera-based systems and propose GhostImage attacks, whose goal is either to create a fake perceived object or to obfuscate an object's image so that it is misclassified. This is achieved by remotely projecting adversarial patterns into camera-perceived images, exploiting two common effects in optical imaging systems: lens flare/ghost effects and auto-exposure control. To make the attack robust to channel perturbations, we generate optimal input patterns by integrating adversarial machine learning techniques with a trained end-to-end channel model. We realized GhostImage attacks with a projector and conducted comprehensive experiments using three different image datasets, in indoor and outdoor environments, and with three different cameras. We demonstrate that GhostImage attacks are applicable to both autonomous driving and security surveillance scenarios. Experimental results show that, depending on the projector-camera distance, attack success rates can be as high as 100%.
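The channel-aware pattern optimization described above can be illustrated with a minimal sketch: an adversarial pattern is optimized through a trained, differentiable projector-to-camera channel model, averaging the classification loss over sampled channel perturbations so the pattern remains effective under channel variation. This is only an illustration under stated assumptions, not the paper's exact formulation; `classifier`, `channel_model`, `base_image`, and `target_class` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def optimize_pattern(classifier, channel_model, base_image, target_class,
                     steps=200, lr=0.05, n_samples=8):
    """Find a projected pattern that, after passing through the projector-to-
    camera channel model, drives the classifier toward `target_class`."""
    # Unconstrained parameterization; sigmoid keeps pattern values in [0, 1].
    pattern = torch.zeros_like(base_image, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_samples):
            # channel_model is assumed stochastic: each call samples channel
            # perturbations (e.g., color distortion, exposure, alignment noise),
            # so averaging the loss makes the pattern robust to channel variation.
            perceived = channel_model(base_image, torch.sigmoid(pattern))
            loss = loss + F.cross_entropy(classifier(perceived), target_class)
        loss = loss / n_samples
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(pattern).detach()
```

The averaging over sampled channel realizations is in the spirit of expectation-over-transformations attacks; the key design choice is that gradients flow through the channel model itself, so the optimized pattern accounts for how projection distorts before reaching the camera sensor.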