Image denoising techniques have been widely employed in multimedia devices as an image post-processing operation that removes sensor noise and produces visually clean images for downstream AI tasks, e.g., image classification. In this paper, we investigate a new task, the adversarial denoise attack, which stealthily embeds an attack inside the image denoising module. Thus, it can denoise input images while simultaneously fooling state-of-the-art deep models. We formulate this new task as a kernel prediction problem and propose adversarial-denoising kernel prediction, which produces adversarial-noiseless kernels for effective denoising and adversarial attacking at the same time. Furthermore, we implement adaptive perceptual region localization to identify semantically related vulnerable regions, with which the attack can be more effective while not significantly degrading denoising quality. Accordingly, our proposed method is termed Pasadena (Perceptually Aware and Stealthy Adversarial DENoise Attack). We validate Pasadena on the NeurIPS'17 adversarial competition dataset and demonstrate that it not only realizes denoising but also achieves higher attack success rates and better transferability than state-of-the-art attacks.
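To make the kernel-prediction formulation concrete, the following is a minimal sketch (not the authors' code) of how predicted per-pixel kernels are applied to a noisy image in kernel-prediction denoising. The function name, the kernel size K, and the randomly generated kernels are illustrative assumptions; in Pasadena the adversarial-noiseless kernels would instead come from the paper's prediction network.

```python
# Minimal sketch of per-pixel kernel application in kernel-prediction
# denoising. Everything here is illustrative: the helper name, the
# kernel size K = 5, and the fake kernels are assumptions, not the
# paper's implementation.
import torch
import torch.nn.functional as F

def apply_per_pixel_kernels(noisy, kernels):
    """Convolve each pixel with its own predicted K x K kernel.

    noisy:   (B, C, H, W) noisy input image.
    kernels: (B, K*K, H, W) per-pixel kernel weights, assumed
             softmax-normalized so each pixel's kernel sums to 1.
    """
    B, C, H, W = noisy.shape
    K2 = kernels.shape[1]
    K = int(K2 ** 0.5)
    # Extract the K x K neighborhood around every pixel: (B, C*K*K, H*W).
    patches = F.unfold(noisy, kernel_size=K, padding=K // 2)
    patches = patches.view(B, C, K2, H, W)
    # Weighted sum of each neighborhood with that pixel's kernel,
    # sharing the kernel across color channels.
    weights = kernels.unsqueeze(1)             # (B, 1, K*K, H, W)
    denoised = (patches * weights).sum(dim=2)  # (B, C, H, W)
    return denoised

# Usage: the kernels would normally be produced by a prediction network;
# here they are randomly generated stand-ins.
noisy = torch.rand(1, 3, 64, 64)
raw = torch.randn(1, 5 * 5, 64, 64)       # assume K = 5
kernels = torch.softmax(raw, dim=1)       # normalize each pixel's kernel
clean = apply_per_pixel_kernels(noisy, kernels)
print(clean.shape)                        # torch.Size([1, 3, 64, 64])
```

In this formulation, the output at each pixel is a convex combination of its neighborhood, which is what lets a single predicted kernel both average away noise and, in an adversarial variant, shift the result toward perturbations that mislead a classifier.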