The majority of ML research concerns slow, statistical learning of i.i.d. samples from large, labelled datasets. Animals do not learn this way. An enviable characteristic of animal learning is 'episodic' learning - the ability to rapidly memorize a specific experience as a composition of existing concepts, without provided labels. The new knowledge can then be used to distinguish between similar experiences, to generalize between classes, and to selectively consolidate to long-term memory. The Hippocampus is known to be vital to these abilities. We present AHA, a biologically plausible computational model of the Hippocampus. Unlike most machine learning models, AHA is trained without any external labels and uses only local and immediate credit assignment. We demonstrate AHA on a superset of the Omniglot classification benchmark. The extended benchmark covers a wider range of known Hippocampal functions by testing pattern separation, completion, and reconstruction of the original input. These functions are all performed within a single configuration of the computational model. Despite these constraints, results are comparable to those of state-of-the-art deep convolutional ANNs. In addition to the demonstrated high degree of functional overlap with the Hippocampal region, AHA is remarkably well aligned with current macro-scale biological models and uses biologically plausible micro-scale learning rules.
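To make the phrase "local and immediate credit assignment" concrete: unlike backpropagation, where each weight update depends on an error signal propagated from distant layers, a local rule changes each weight using only the activity of the two units that weight connects, and does so immediately for each sample. The sketch below is not AHA's actual update; it is a generic Hebbian rule with Oja-style normalization, included only to illustrate this class of learning rule (the layer shape, learning rate, and data are all hypothetical).

```python
# Minimal sketch of a local, immediate learning rule (illustrative only;
# NOT the update used in AHA). Each weight change depends solely on the
# presynaptic and postsynaptic activity at that connection - no
# backpropagated error signal, no replayed batches.
import numpy as np

def hebbian_update(W, x, lr=0.01):
    """One immediate, local update of weights W given a single input x.

    W  : (n_out, n_in) weight matrix of a hypothetical layer
    x  : (n_in,) presynaptic activity
    lr : learning rate
    """
    y = W @ x  # postsynaptic activity (linear, for brevity)
    # Oja's rule: Hebbian term y_i * x_j minus a local decay y_i^2 * W_ij
    # that keeps the weights bounded.
    dW = lr * (np.outer(y, x) - (y[:, None] ** 2) * W)
    return W + dW

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 16))
for _ in range(100):  # each sample is learned in one shot, as it arrives
    W = hebbian_update(W, rng.normal(size=16))
```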