Abstract: Deep Convolutional Neural Networks (DCNNs) have become the state-of-the-art computational models of biological object recognition. Their remarkable success has helped vision science break new ground. Consequently, recent efforts have begun to transfer this achievement to the domain of biological face recognition. In this regard, face detection can be investigated by comparing face-selective biological areas and neurons to artificial layers and units. Similarly, face identification can be examined by comparing in vivo and in silico face space representations. In this mini-review, we summarize the first studies with this aim. We argue that DCNNs are useful models that follow the general hierarchical organization of biological face recognition. In two spotlights, we emphasize unique scientific contributions of these models. First, studies on face detection in DCNNs propose that elementary face selectivity emerges automatically through feedforward processing. Second, studies on face identification in DCNNs suggest that experience and additional generative mechanisms are required for this challenge. Taken together, because this novel computational approach enables close control of predisposition (i.e., architecture) and experience (i.e., training data), it could also inform longstanding debates on the substrates of biological face recognition.
Abstract: Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision, have evolved into the best current computational models of object recognition, and consequently show strong architectural and functional parallels with the ventral visual pathway in comparisons with neuroimaging and neural time-series data. As recent advances in deep learning appear to decrease this similarity, computational neuroscience is challenged to reverse-engineer biological plausibility in order to obtain useful models. While previous studies have shown that biologically inspired architectures can increase the human-likeness of such models, in this study we investigate a purely data-driven approach. We use human eye-tracking data to directly modify training examples and thereby guide the models' visual attention during object recognition in natural images either towards or away from the focus of human fixations. We compare and validate the different manipulation types (i.e., standard, human-like, and non-human-like attention) through GradCAM saliency maps evaluated against human participant eye-tracking data. Our results demonstrate that the proposed guided focus manipulation works as intended in the negative direction: non-human-like models focus on significantly dissimilar image parts compared to humans. The observed effects were highly category-specific, enhanced by animacy and face presence, developed only after feedforward processing was completed, and indicated a strong influence on face detection. With this approach, however, no significant increase in human-likeness was found. Possible applications of overt visual attention in DCNNs and further implications for theories of face detection are discussed.
Abstract: Deep convolutional neural networks (DCNNs) and the ventral visual pathway share extensive architectural and functional similarities in visual challenges such as object recognition. Recent insights have demonstrated that both hierarchical cascades can be compared in terms of both exhibited behavior and underlying activation. However, these approaches ignore key differences in the spatial priorities of information processing. In this proof-of-concept study, we demonstrate a comparison of human observers (N = 45) and three feedforward DCNNs through eye tracking and saliency maps. The results reveal fundamentally different resolutions in the two visualization methods that need to be considered for an insightful comparison. Moreover, we provide evidence that a DCNN with biologically plausible receptive field sizes, called vNet, shows higher agreement with human viewing behavior than a standard ResNet architecture. We find that image-specific factors such as category, animacy, arousal, and valence are directly linked to the agreement of spatial object recognition priorities between humans and DCNNs, while other measures such as difficulty and general image properties are not. With this approach, we aim to open up new perspectives at the intersection of biological and computer vision research.
Abstract: For a considerable time, deep convolutional neural networks (DCNNs) have matched human benchmark performance in object recognition. Accordingly, computational neuroscience and the field of machine learning have begun to identify numerous similarities and differences between artificial and biological vision. This study presents a behavioral comparison of visual core object recognition between humans and feedforward neural networks in a classification learning paradigm on an ImageNet data set. For this purpose, human participants (n = 65) competed against different feedforward DCNNs in an online experiment. The approach, based on a typical learning process with seven different monkey categories, included a training and validation phase with natural examples, as well as a testing phase with novel shape and color manipulations. Analyses of accuracy revealed that humans not only outperform DCNNs in all conditions but also display significantly greater robustness towards shape and, most notably, color alterations. Furthermore, a close examination of behavioral patterns underscores these findings by revealing independent classification errors between the groups. The obtained results endorse the implementation of recurrent circuits similar to those of the primate ventral stream in artificial vision models as a way to achieve adequate object generalization across previously unencountered manipulations.