In this paper, we propose a novel approach for mining different program features by analysing the internal behaviour of a deep neural network trained on source code. Using an unlabelled dataset of Java programs and three different embedding strategies for the methods in the dataset, we train an autoencoder for each program embedding and then test the emergent ability of the internal neurons to autonomously build internal representations of different program features. We define three binary classification labelling policies inspired by real programming issues and test the performance of each neuron in classifying programs according to these rules, showing that some neurons can indeed detect different program properties. We also analyse how the program representation chosen as input affects performance on these tasks. Furthermore, we are interested in finding the overall most informative neurons in the network regardless of a given task; to this aim, we propose and evaluate two methods for ranking neurons independently of any property. Finally, we discuss how these ideas can be applied in different settings to simplify programmers' work, for instance when integrated into environments such as software repositories or code editors.
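The following is a minimal sketch of the kind of pipeline summarised above, not the paper's actual implementation: it assumes PyTorch and scikit-learn, uses synthetic placeholder embeddings and labels, and picks illustrative network sizes. An autoencoder is trained on program embedding vectors without labels, and each hidden neuron's activation is then scored as a stand-alone binary classifier for one labelling policy via ROC AUC.

```python
# Hypothetical sketch: unsupervised autoencoder on program embeddings,
# then probing each hidden neuron as a binary classifier for a property.
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

EMB_DIM, HID_DIM = 128, 32  # assumed embedding / bottleneck sizes

class AutoEncoder(nn.Module):
    def __init__(self, emb_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(emb_dim, hid_dim), nn.Tanh())
        self.decoder = nn.Linear(hid_dim, emb_dim)

    def forward(self, x):
        h = self.encoder(x)          # hidden activations = "internal neurons"
        return self.decoder(h), h

# Placeholder data: rows stand in for method embeddings; y encodes one
# binary labelling policy (used only for probing, never for training).
X = torch.randn(1000, EMB_DIM)
y = np.random.randint(0, 2, size=1000)

model = AutoEncoder(EMB_DIM, HID_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(50):                  # unsupervised reconstruction training
    opt.zero_grad()
    recon, _ = model(X)
    loss = loss_fn(recon, X)
    loss.backward()
    opt.step()

# Probe each hidden neuron: use its raw activation as a classification score
# and measure class separability with an orientation-agnostic ROC AUC.
with torch.no_grad():
    _, H = model(X)

aucs = []
for j in range(HID_DIM):
    auc = roc_auc_score(y, H[:, j].numpy())
    aucs.append(max(auc, 1.0 - auc))

best = int(np.argmax(aucs))
print(f"best neuron: {best}, AUC = {aucs[best]:.3f}")
```

In this sketch, a neuron whose AUC is well above 0.5 has, as a by-product of unsupervised training, learnt an internal representation that separates programs according to the chosen property; repeating the probe per embedding strategy and per labelling policy gives the comparisons discussed in the paper.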