Image retrieval is one of the most widely studied tasks in computer vision, yet approaches in the literature can be roughly divided into two groups: category-based and instance-based retrieval. In this work, we show that the retrieval task is much richer and more complex, and can be viewed as a continuous spectrum spanning the space between these operational points. We therefore propose a novel retrieval task in which we smoothly traverse the simplex from category-based to instance- and attribute-based retrieval. To this end, we introduce a deep network architecture that learns to decompose an input query image into its basic components of categorical and attribute information. Guided by a continuous control parameter, the model then reconstructs a new query embedding by mixing these two signals in different proportions to target a specific point along the retrieval simplex. We validate our idea through a detailed evaluation of the proposed model and highlight the advantages of our approach over a set of well-established retrieval baselines.
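To make the decompose-and-mix idea concrete, the sketch below shows one way such a model could be structured in PyTorch. The module names, dimensions, and the convex-combination form of the mixing step are illustrative assumptions for exposition, not the exact architecture described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplexRetrievalNet(nn.Module):
    """Hypothetical decompose-and-mix model: shared features are split into a
    categorical and an attribute embedding, then recombined as a convex mixture
    controlled by a continuous parameter alpha."""
    def __init__(self, in_dim=512, feat_dim=256, embed_dim=128):
        super().__init__()
        # Stand-in for a CNN backbone producing a feature vector per image.
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.category_head = nn.Linear(feat_dim, embed_dim)   # categorical signal
        self.attribute_head = nn.Linear(feat_dim, embed_dim)  # attribute signal

    def forward(self, x, alpha):
        h = self.backbone(x)
        cat_emb = F.normalize(self.category_head(h), dim=-1)
        att_emb = F.normalize(self.attribute_head(h), dim=-1)
        # Convex mix: alpha=1 targets purely categorical retrieval, alpha=0 purely
        # attribute-based retrieval; intermediate values traverse the simplex.
        mixed = alpha * cat_emb + (1.0 - alpha) * att_emb
        return F.normalize(mixed, dim=-1)

# Usage: embed the query and a gallery at a chosen operating point, then rank
# gallery items by cosine similarity to the mixed query embedding.
model = SimplexRetrievalNet()
query = torch.randn(1, 512)
gallery = torch.randn(100, 512)
q = model(query, alpha=0.5)
g = model(gallery, alpha=0.5)
ranking = (g @ q.t()).squeeze(1).argsort(descending=True)
```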