End-to-end automatic speech recognition (ASR) systems are increasingly favoured due to their direct treatment of the problem of speech-to-text conversion. However, these systems are known to be data-hungry and hence underperform in low-resource settings. In this work, we propose a seemingly simple but effective technique to improve low-resource end-to-end ASR performance. We compress the output vocabulary of the end-to-end ASR system using linguistically meaningful reductions and then reconstruct the original vocabulary using a standalone module. Our objective is two-fold: to lessen the burden on the low-resource end-to-end ASR system by reducing the output vocabulary space, and to design a powerful reconstruction module that recovers sequences in the original vocabulary from sequences in the reduced vocabulary. We present two reconstruction modules, an encoder-decoder-based architecture and a finite-state transducer-based model. We demonstrate the efficacy of our proposed techniques using ASR systems for two Indian languages, Gujarati and Telugu.
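
To make the reduce-and-reconstruct pipeline concrete, the sketch below illustrates the idea with a hypothetical character-level reduction map and a placeholder reconstruction callable. The map, the token inventory, and the function names are illustrative assumptions, not the actual linguistically meaningful reductions or reconstruction modules proposed in this work.

# Minimal sketch of the reduce-then-reconstruct pipeline, under assumed,
# hypothetical reductions (not the paper's actual mappings).

# Hypothetical many-to-one reduction: aspirated consonants collapse onto
# their unaspirated counterparts, shrinking the ASR output vocabulary.
REDUCTION_MAP = {
    "kh": "k",
    "gh": "g",
    "ch": "c",
    "jh": "j",
}


def reduce_transcript(tokens):
    """Map a token sequence in the original vocabulary to the reduced one."""
    return [REDUCTION_MAP.get(t, t) for t in tokens]


def reconstruct_transcript(reduced_tokens, reconstruction_model):
    """Recover a sequence in the original vocabulary from the reduced one.

    `reconstruction_model` stands in for a standalone module such as an
    encoder-decoder network or a finite-state transducer; here it is simply
    any callable from reduced sequences to original-vocabulary sequences.
    """
    return reconstruction_model(reduced_tokens)


if __name__ == "__main__":
    # Toy example only: the ASR system is trained to emit reduced-vocabulary
    # sequences, and reconstruction restores the original labels afterwards.
    original = ["gh", "a", "r"]
    reduced = reduce_transcript(original)          # -> ["g", "a", "r"]
    identity = lambda seq: seq                     # placeholder reconstructor
    print(reduced, reconstruct_transcript(reduced, identity))

In a trained system, the placeholder reconstructor would be replaced by one of the two proposed modules, which learn to undo the many-to-one reduction using context.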