Abstract: Deep neural networks are powerful machine learning models that have achieved excellent results on many classification tasks. However, they are considered black boxes, and some of their properties remain to be formalized. In the context of image recognition, it is still an arduous task to understand why an image is or is not recognized. In this study, we formalize several properties shared by eight state-of-the-art deep neural networks in order to grasp the principles that allow a given deep neural network to classify an image. Our results, tested on these eight networks, show that an image can be subdivided into several regions (patches) that respond with different degrees of probability (local property). With the same patch, some locations in the image can respond two or three orders of magnitude more strongly than other locations (spatial property). Some locations are activators and others inhibitors (activation-inhibition property). Repeating the same patch can increase (or decrease) the probability that an object is recognized (cumulative property). Furthermore, we propose a new approach, called Deepception, that exploits these properties to deceive a deep neural network. For the VGG-VDD-19 network, we obtain a fooling ratio of 88\%. Thanks to our "Psychophysics" approach, no prior knowledge of the network architectures is required.
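To make the spatial property concrete, the sketch below probes a classifier by pasting the same patch at different image locations and recording the target-class probability at each position; locations whose responses differ by orders of magnitude would exhibit the effect described above. This is a minimal illustration, not the authors' Deepception code: it assumes a PyTorch/torchvision environment and uses a pretrained VGG-19 as a stand-in for the eight networks studied, and the helper name `probe_patch` is hypothetical.

\begin{verbatim}
# Minimal sketch of patch probing (assumption: PyTorch + torchvision;
# probe_patch is a hypothetical helper, not from the paper).
import torch
import torchvision.models as models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

def probe_patch(image, patch, target_class, stride=32):
    """Slide `patch` over `image`; return target-class probability per location.

    image: (3, H, W) tensor, already normalized for the model.
    patch: (3, h, w) tensor in the same value range as `image`.
    """
    _, H, W = image.shape
    _, h, w = patch.shape
    scores = {}
    with torch.no_grad():
        for y in range(0, H - h + 1, stride):
            for x in range(0, W - w + 1, stride):
                probed = image.clone()
                probed[:, y:y + h, x:x + w] = patch  # paste patch at (y, x)
                logits = model(probed.unsqueeze(0))
                prob = torch.softmax(logits, dim=1)[0, target_class]
                scores[(y, x)] = prob.item()
    return scores
\end{verbatim}

Comparing the resulting per-location probabilities against the unmodified image's score would distinguish activator locations (probability increases) from inhibitor locations (probability decreases), in the sense of the activation-inhibition property above.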