Abstract: The effective receptive field of a fully convolutional neural network is an important consideration when designing an architecture, as it defines the portion of the input visible to each convolutional kernel. We propose a neural network module, the translated skip connection, which extends traditional skip connections. Translated skip connections geometrically increase the receptive field of an architecture with negligible impact on both the size of the parameter space and the computational complexity. By embedding translated skip connections into a benchmark architecture, we demonstrate that our module matches or outperforms four other approaches to expanding the effective receptive fields of fully convolutional neural networks. We confirm this result across five contemporary image segmentation datasets from disparate domains, including the detection of COVID-19 infection, segmentation of aerial imagery, common object segmentation, and segmentation for self-driving cars.
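As an illustration of the idea only (the abstract does not specify the exact mechanics), the following is a minimal PyTorch sketch of a block whose skip branch is spatially translated before being merged back into the convolutional branch. The module name TranslatedSkipBlock, the shift parameter, and the doubling schedule of shifts are all assumptions made for this example, not the paper's confirmed design.

    import torch
    import torch.nn as nn

    class TranslatedSkipBlock(nn.Module):
        # Hypothetical sketch: a convolutional block whose skip branch is
        # spatially translated (rolled) before being merged, so each output
        # position also receives features from a distant input position.
        def __init__(self, channels, shift):
            super().__init__()
            self.shift = shift  # (dy, dx) translation of the skip branch, in pixels
            self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):
            y = torch.relu(self.conv(x))                          # ordinary 3x3 conv branch
            skip = torch.roll(x, shifts=self.shift, dims=(2, 3))  # translated skip branch
            return y + skip

    # Stacking blocks with doubling shifts grows the region each output can
    # see geometrically, while adding no parameters beyond the convolutions.
    blocks = nn.Sequential(*[TranslatedSkipBlock(64, (2 ** i, 2 ** i)) for i in range(4)])
    out = blocks(torch.randn(1, 64, 128, 128))  # -> torch.Size([1, 64, 128, 128])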
Abstract: Dictionary learning and sparse coding have been widely studied as mechanisms for unsupervised feature learning. Unsupervised learning could bring enormous benefit to the processing of hyperspectral images and other remote sensing data, because labelled data are often scarce in this field. We propose a method for clustering the pixels of hyperspectral images using sparse coefficients, computed from a representative dictionary, as features. We show empirically that the proposed method works more effectively than clustering on the original pixels. We also demonstrate that our approach, in certain circumstances, outperforms the clustering results of features extracted using principal component analysis and non-negative matrix factorisation. Furthermore, our method is suitable for applications that repeatedly cluster an ever-growing amount of high-dimensional data, which is the case when working with hyperspectral satellite imagery.
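The general pipeline the abstract describes can be sketched with scikit-learn as follows; the particular dictionary-learning algorithm, sparsity settings, and clustering method are not stated in the abstract, so MiniBatchDictionaryLearning, KMeans, and the file name scene.npy are illustrative assumptions rather than the paper's configuration.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    # Hyperspectral cube of shape (height, width, bands): one spectrum per pixel.
    cube = np.load("scene.npy")              # hypothetical input file
    pixels = cube.reshape(-1, cube.shape[-1])

    # Learn a representative dictionary from the pixel spectra.
    dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                       batch_size=256, random_state=0)
    dico.fit(pixels)

    # Sparse coefficients of each pixel over the dictionary serve as its features.
    codes = dico.transform(pixels)

    # Cluster in the sparse-coefficient space rather than on the raw spectra.
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(codes)
    label_map = labels.reshape(cube.shape[:2])

Because the dictionary is learned once and new pixels are encoded and clustered against it, this kind of pipeline lends itself to the repeated clustering of growing data volumes mentioned in the abstract.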