The vulnerability of deep neural networks (DNNs) to adversarial attacks has garnered significant attention. Researchers have explored patch-based physical attacks, yet traditional approaches, while effective, often rely on conspicuous patches covering the target object, making them easy for human observers to detect. Recently, camera-based physical attacks have emerged as a stealthier alternative: rather than modifying the target object, they introduce perturbations directly on the camera lens, achieving a notable breakthrough in stealthiness. However, prevailing camera-based strategies require deploying multiple patches on the camera lens, which complicates the attack in practice. To address this issue, we propose the Adversarial Camera Patch (ADCP), which mounts the attack with a single patch.
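To make the camera-patch idea concrete, the following is a minimal sketch of how a translucent lens patch can be simulated digitally by alpha-compositing a colored rectangle onto the captured image before it reaches the victim model. The function name, parameters, and values here are illustrative assumptions for exposition, not the paper's actual pipeline.

```python
import numpy as np

def apply_camera_patch(image, x, y, w, h, rgb, alpha):
    """Alpha-composite a translucent rectangle onto an image,
    approximating a sticker placed on the camera lens.

    image: float array in [0, 1] with shape (H, W, 3)
    (x, y): top-left corner of the patch in pixels
    (w, h): patch width and height in pixels
    rgb:    patch color, e.g. (1.0, 0.0, 0.0)
    alpha:  patch opacity in [0, 1]; low values keep the scene visible
    """
    patched = image.copy()
    region = patched[y:y + h, x:x + w]
    # Blend the patch color with the underlying scene pixels.
    patched[y:y + h, x:x + w] = (1 - alpha) * region + alpha * np.asarray(rgb)
    return patched

# Example: a faint gray patch over the center of a 224x224 image
# (random pixels stand in for a real photograph).
image = np.random.rand(224, 224, 3)
adv = apply_camera_patch(image, 80, 80, 64, 64, (0.5, 0.5, 0.5), 0.3)
```

Under this kind of simulation, an attack would search the patch parameters (position, size, color, opacity), for instance with a gradient-free optimizer, to maximize the victim model's loss, then physically realize the best patch on the lens.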