The MIT Sloop system indexes and retrieves photographs of individual animals from databases of non-stationary animal population distributions. It does so by adaptively representing and matching generic visual features using sparse relevance feedback from experts and crowds. Here, we describe the Sloop system and its applications, then compare its approach to a standard deep learning formulation. We show that priming with amplitude and deformation features requires only very shallow networks to produce superior recognition results. These results suggest that relevance feedback, which underlies Sloop's high-recall performance, may also be essential for deep learning approaches to individual identification to deliver comparable results.