Generalization and reuse of agent behaviour across a variety of learning tasks promise to carry the next wave of breakthroughs in Reinforcement Learning (RL). The field of Curriculum Learning proposes strategies that support a learning agent by exposing it to a tailored sequence of tasks throughout training, e.g. by progressively increasing their complexity. In this paper, we build on recently established results in Curriculum Learning for episodic RL, proposing an extension that integrates easily with well-known RL algorithms and deriving a theoretical formulation from an RL-as-Inference perspective. We evaluate the proposed scheme with different Deep RL algorithms on representative tasks, demonstrating that it significantly improves learning performance.