Convolutional neural networks (CNNs) attain better visual recognition performance than fully connected neural networks despite having far fewer parameters, thanks to their parameter-sharing principle. Hence, modern architectures are designed to contain very few fully connected layers, often placed at the end, after multiple layers of convolutions. Interestingly, large fully connected layers can be replaced with relatively small groups of tiny matrices applied across the entire image. Moreover, although this strategy already reduces the number of parameters, most of the convolutions can be eliminated as well without any loss in recognition performance. However, there is no solid recipe for detecting the hidden subset of convolutional neurons that is responsible for the majority of the recognition work. Hence, in this work, we use matrix characteristics based on eigenvalues, in addition to the classical weight-based importance assignment approach to pruning, to shed light on the internal mechanisms of a widely used family of CNNs, namely residual neural networks (ResNets), on the image classification problem using the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets.
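The abstract contrasts two filter-importance criteria: the classical weight-based score and an eigenvalue-based one. The exact eigenvalue criterion is not specified here, so the following is a minimal sketch, assuming the per-filter score is the largest eigenvalue of the filter's Gram matrix; the function names (`l1_importance`, `eigen_importance`, `prune_mask`) and the `keep_ratio` parameter are illustrative, not from the paper.

```python
import numpy as np

def l1_importance(layer_weights):
    """Classical weight-based score: L1 norm of each output filter.

    layer_weights: array of shape (out_ch, in_ch, kh, kw).
    Returns one score per output channel.
    """
    return np.abs(layer_weights).sum(axis=(1, 2, 3))

def eigen_importance(layer_weights):
    """Eigenvalue-based score (one plausible instantiation, assumed here):
    unroll each filter into a matrix W of shape (in_ch, kh*kw) and take the
    largest eigenvalue of its Gram matrix W @ W.T, i.e. the squared largest
    singular value of W.
    """
    scores = np.empty(layer_weights.shape[0])
    for i, kernel in enumerate(layer_weights):
        W = kernel.reshape(kernel.shape[0], -1)      # (in_ch, kh*kw)
        eigvals = np.linalg.eigvalsh(W @ W.T)        # ascending, real-valued
        scores[i] = eigvals[-1]
    return scores

def prune_mask(scores, keep_ratio=0.5):
    """Keep the top keep_ratio fraction of filters by score."""
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

# Example: rank and prune a hypothetical 64-filter 3x3 convolution layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
mask = prune_mask(eigen_importance(w), keep_ratio=0.25)
print(f"kept {mask.sum()} of {len(mask)} filters")
```

In this sketch, a filter whose unrolled matrix has a large dominant eigenvalue is treated as contributing a strong direction to the layer's transformation, whereas the L1 score rewards large weight magnitudes regardless of how they are structured; comparing the two rankings is one way to probe which convolutions carry the recognition work.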