Hierarchies are of fundamental interest in both stochastic optimal control and biological control: they endow control algorithms with a range of desirable computational properties, and they may constitute a core organizing principle of sensorimotor and cognitive control systems. However, a theoretically justified construction of state-space hierarchies spanning all spatial resolutions, together with an account of how such hierarchies evolve during policy inference, remains elusive. Here, a formalism for deriving such normative representations of discrete Markov decision processes is introduced in the context of graphs. The resulting hierarchies correspond to a hierarchical policy inference algorithm that approximates a discrete gradient flow between the state-space trajectory densities generated by the prior and optimal policies.
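One way to make the notion of a discrete gradient flow between trajectory densities concrete is a minimizing-movement scheme; the particular functional below is an illustrative assumption in the spirit of KL-control formulations, not a construction fixed by the abstract. Writing $p_0$ for the trajectory density induced by the prior policy and $c(\xi)$ for a hypothetical trajectory cost, each inference step would solve
\[
p_{k+1} \;=\; \arg\min_{p}\; \mathrm{KL}\!\left(p \,\|\, p_k\right) \;+\; \tau\,\mathcal{F}(p),
\qquad
\mathcal{F}(p) \;=\; \mathbb{E}_{p}\!\left[c(\xi)\right] \;+\; \mathrm{KL}\!\left(p \,\|\, p_0\right),
\]
so that the iterates $p_k$ interpolate between the prior-policy density $p_0$ and the trajectory density of the optimal policy, which minimizes $\mathcal{F}$.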