Neural implicit representations have shown substantial improvements in efficiently storing 3D data compared to conventional formats. However, existing work has focused mainly on storage and subsequent reconstruction. In this work, we argue that training neural representations for reconstruction jointly with conventional tasks produces more general encodings that admit reconstructions of equal quality to single-task training, while providing improved results on the conventional tasks compared to single-task encodings. Through multi-task experiments on reconstruction, classification, and segmentation, our approach learns feature-rich encodings that yield high-quality results for each task. We also reformulate the segmentation task, creating a more representative challenge for implicit representation contexts.
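
As a rough illustration of the multi-task setup described above, the following PyTorch-style sketch attaches a reconstruction (occupancy) decoder, a shape-level classification head, and a per-point segmentation head to a shared latent encoding and trains them under a summed loss. All module names, dimensions, and loss weights here are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal multi-task sketch (assumed architecture, not the paper's exact model).
import torch
import torch.nn as nn

class MultiTaskImplicitNet(nn.Module):
    def __init__(self, latent_dim=256, num_classes=10, num_parts=4):
        super().__init__()
        # The shared per-shape latent code z is assumed to come from elsewhere
        # (e.g. an encoder or auto-decoder); this module only consumes it.
        self.recon_head = nn.Sequential(       # implicit reconstruction: (z, xyz) -> occupancy
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1))
        self.cls_head = nn.Sequential(         # shape-level classification from z
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes))
        self.seg_head = nn.Sequential(         # per-point part segmentation from (z, xyz)
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, num_parts))

    def forward(self, z, xyz):
        # z: (B, latent_dim) shared encoding, xyz: (B, N, 3) query points
        z_exp = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        zx = torch.cat([z_exp, xyz], dim=-1)
        occ = self.recon_head(zx).squeeze(-1)  # (B, N) occupancy logits
        cls_logits = self.cls_head(z)          # (B, num_classes)
        seg_logits = self.seg_head(zx)         # (B, N, num_parts)
        return occ, cls_logits, seg_logits

def multi_task_loss(occ, cls_logits, seg_logits, occ_gt, cls_gt, seg_gt,
                    weights=(1.0, 1.0, 1.0)):
    # Summed per-task losses; the weights are illustrative, not tuned values.
    l_rec = nn.functional.binary_cross_entropy_with_logits(occ, occ_gt.float())
    l_cls = nn.functional.cross_entropy(cls_logits, cls_gt)
    l_seg = nn.functional.cross_entropy(seg_logits.flatten(0, 1), seg_gt.flatten())
    return weights[0] * l_rec + weights[1] * l_cls + weights[2] * l_seg
```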