Humans can decompose complex tasks into combinations of simpler subtasks in order to learn those tasks more efficiently. For example, a backflip can be viewed as a combination of four subskills: jumping, tucking the knees, rolling backwards, and thrusting the arms downwards. Motivated by this line of reasoning, we propose a new algorithm that trains neural network policies on simple, easy-to-learn skills in order to cultivate latent spaces that accelerate adversarial imitation learning of complex, hard-to-learn skills. We evaluate our algorithm on a difficult task in a high-dimensional environment and find that it consistently outperforms a state-of-the-art baseline in both training speed and overall task performance.
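
To make the two-stage idea concrete, the sketch below shows one way such a scheme could be wired up in PyTorch: a shared encoder is pretrained on easy subskills (behavioral cloning stands in for the pretraining objective here, purely for brevity) and then reused, with a fresh output head, while a GAIL-style discriminator drives imitation of the harder skill. All dimensions, module names, and the choice of pretraining loss are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Illustrative sketch: pretrain a policy encoder on easy subskills, then reuse
# its latent space when imitating a harder skill with a GAIL-style discriminator.
# Dimensions and losses below are placeholder assumptions, not the paper's setup.

OBS_DIM, ACT_DIM, LATENT_DIM = 32, 8, 16

encoder = nn.Sequential(
    nn.Linear(OBS_DIM, 64), nn.Tanh(),
    nn.Linear(64, LATENT_DIM), nn.Tanh(),
)
subskill_head = nn.Linear(LATENT_DIM, ACT_DIM)  # head used during subskill pretraining
complex_head = nn.Linear(LATENT_DIM, ACT_DIM)   # fresh head for the hard skill

# --- Stage 1: pretrain the encoder on easy subskills (behavioral cloning for brevity) ---
opt = torch.optim.Adam(list(encoder.parameters()) + list(subskill_head.parameters()), lr=3e-4)
sub_obs, sub_act = torch.randn(256, OBS_DIM), torch.randn(256, ACT_DIM)  # placeholder subskill data
for _ in range(100):
    loss = ((subskill_head(encoder(sub_obs)) - sub_act) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: adversarial imitation of the hard skill, reusing the pretrained encoder ---
discriminator = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
d_opt = torch.optim.Adam(discriminator.parameters(), lr=3e-4)

demo_obs, demo_act = torch.randn(256, OBS_DIM), torch.randn(256, ACT_DIM)  # placeholder expert demos
policy_obs = torch.randn(256, OBS_DIM)
policy_act = complex_head(encoder(policy_obs)).detach()  # actions from the warm-started policy

# One discriminator update: label expert pairs 1, policy pairs 0 (standard GAIL objective).
d_loss = (
    bce(discriminator(torch.cat([demo_obs, demo_act], dim=-1)), torch.ones(256, 1))
    + bce(discriminator(torch.cat([policy_obs, policy_act], dim=-1)), torch.zeros(256, 1))
)
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# The policy (encoder + complex_head) would then be updated with RL on a reward such as
# r(s, a) = -log(1 - sigmoid(D(s, a))); the pretrained latent space is what is expected
# to accelerate this adversarial imitation stage.
```

The design choice this sketch highlights is that only the output head is replaced between stages; the latent representation shaped by the easy subskills is carried over, which is the mechanism the abstract credits for faster learning of the complex skill.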