Random device mismatch, which arises from the scaling of CMOS (complementary metal-oxide semiconductor) technology into the deep submicron regime, degrades the accuracy of analogue circuits, and methods to combat this mismatch increase design complexity. We have developed a novel neuromorphic system called a Trainable Analogue Block (TAB), which exploits device mismatch as a means of randomly projecting the input to a higher-dimensional space. The TAB framework is inspired by the principles of neural population coding operating in the biological nervous system. Three neuronal layers, namely input, hidden, and output, constitute the TAB framework, with the number of hidden-layer neurons far exceeding the number of input-layer neurons. Here, we present measurement results of the first prototype TAB chip, built in a 65 nm process technology, and show its learning capability for various regression tasks. The TAB chip exploits the inherent randomness and variability arising from the fabrication process to perform these learning tasks. Additionally, we characterise each neuron and discuss the statistical variability of its tuning curve arising from random device mismatch, a property desirable for the learning capability of the TAB. We also discuss the effect of the number of hidden neurons and of the output-weight resolution on the learning accuracy of the TAB.
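
As a rough illustration of the idea, and not of the chip's actual circuitry or training procedure, the Python sketch below mimics a TAB-style setup: a fixed, randomly "mismatched" hidden layer expands a one-dimensional input into a higher-dimensional space, and only the linear output weights are learned. The tanh tuning curves, Gaussian mismatch parameters, least-squares readout, and the `weight_bits` quantisation standing in for limited output-weight resolution are all illustrative assumptions.

```python
# Conceptual sketch of TAB-style learning (illustrative assumptions, not the
# authors' implementation): a fixed, randomly mismatched hidden layer projects
# a 1-D input into a higher-dimensional space; only the output weights are trained.
import numpy as np

rng = np.random.default_rng(0)

def hidden_responses(x, gains, offsets):
    """Hypothetical hidden-neuron tuning curves: per-neuron random gain and
    offset (modelling device mismatch) followed by a saturating nonlinearity."""
    return np.tanh(np.outer(x, gains) + offsets)

def train_output_weights(x, y, gains, offsets, weight_bits=None):
    """Least-squares readout; optionally quantise the weights to emulate a
    finite output-weight resolution."""
    H = hidden_responses(x, gains, offsets)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    if weight_bits is not None:
        step = np.abs(w).max() / (2 ** (weight_bits - 1))
        w = np.round(w / step) * step  # coarse output-weight resolution
    return w

# Toy regression task: learn y = sin(x) with 100 randomly mismatched hidden neurons.
n_hidden = 100
gains = rng.normal(1.0, 0.3, n_hidden)    # mismatch-like variability in gain
offsets = rng.normal(0.0, 1.0, n_hidden)  # mismatch-like variability in offset
x_train = np.linspace(-np.pi, np.pi, 200)
y_train = np.sin(x_train)

w = train_output_weights(x_train, y_train, gains, offsets, weight_bits=8)
y_hat = hidden_responses(x_train, gains, offsets) @ w
print("RMS error:", np.sqrt(np.mean((y_hat - y_train) ** 2)))
```

In such a sketch, increasing `n_hidden` or `weight_bits` typically lowers the regression error, loosely mirroring the dependence on hidden-layer size and output-weight resolution discussed in the paper.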