Creating large datasets of medical radiology images from several sources can be challenging because of differences in acquisition and storage standards. One possible way of controlling and/or assessing the image selection process is through medical image clustering, which in turn requires an efficient method for learning latent image representations. In this paper, we tackle the problem of fully unsupervised clustering of medical images using pixel data only. We test the performance of several contemporary approaches built on top of a convolutional autoencoder (CAE) - convolutional deep embedded clustering (CDEC) and convolutional improved deep embedded clustering (CIDEC) - and three approaches based on preset feature extraction - histogram of oriented gradients (HOG), local binary pattern (LBP) and principal component analysis (PCA). CDEC and CIDEC are end-to-end clustering solutions that learn latent representations and clustering assignments simultaneously, whereas the remaining approaches rely on k-means clustering of fixed embeddings. We train the models on 30,000 images and evaluate them on a separate test set of 8,000 images, all sampled from the PACS repository archive of the Clinical Hospital Centre Rijeka. For evaluation, we use the silhouette score, homogeneity score and normalised mutual information (NMI) on two target parameters closely associated with commonly occurring DICOM tags - Modality and anatomical region (adjusted BodyPartExamined tag). CIDEC attains an NMI score of 0.473 with respect to anatomical region, and CDEC attains an NMI score of 0.645 with respect to the tag Modality - both outperforming the preset feature-extraction baselines.
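The fixed-embedding baselines described above can be sketched as follows. This is a minimal illustration of the PCA + k-means variant with the three evaluation metrics, assuming scikit-learn; the synthetic blobs stand in for the (private) radiology images, and the synthetic labels play the role of a target DICOM tag such as Modality. All names and parameter values here are illustrative, not the paper's actual configuration.

```python
# Sketch of a fixed-embedding clustering baseline: PCA embeddings fed to
# k-means, evaluated with silhouette, homogeneity and NMI scores.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import (normalized_mutual_info_score,
                             homogeneity_score, silhouette_score)

# Stand-in for flattened pixel arrays; y mimics a categorical DICOM tag.
X, y = make_blobs(n_samples=300, n_features=64, centers=4, random_state=0)

# Fixed embedding: project raw pixels into a low-dimensional latent space.
Z = PCA(n_components=8, random_state=0).fit_transform(X)

# Cluster the embeddings; k is set to the number of target classes.
pred = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

# Silhouette is label-free; homogeneity and NMI compare clusters to the tag.
print("silhouette:", silhouette_score(Z, pred))
print("homogeneity:", homogeneity_score(y, pred))
print("NMI:", normalized_mutual_info_score(y, pred))
```

The end-to-end CDEC/CIDEC approaches differ in that the encoder producing `Z` is trained jointly with the clustering objective rather than being fixed before k-means runs.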