Abstract: The Markov Decision Process (MDP) is the underlying model for optimal planning by decision-theoretic agents in stochastic environments. Although much research has focused on solving MDP problems both in tabular form and with factored representations, none has focused on tensor decomposition methods. Solving MDPs with tensor algebra offers the prospect of leveraging advances in tensor-based computation to further increase solver efficiency. In this paper, we develop an MDP solver for multidimensional problems that uses a tensor decomposition method to compress the transition models and optimize the value iteration and policy iteration algorithms. We empirically evaluate our approach against tabular methods and show that our approach can solve much larger problems using substantially less memory, opening up new possibilities for tensor-based approaches in stochastic planning.
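To make the compression idea concrete, the following is a minimal sketch, not the paper's actual decomposition: it runs value iteration on a two-dimensional MDP whose per-action transition tensor is assumed to factorize across state dimensions as P(x',y'|x,y,a) = P1(x'|x,a) P2(y'|y,a), a rank-1 Kronecker structure. All names, sizes, and the random models are hypothetical; the point is that the Bellman backup can be computed by contracting the small per-dimension factors instead of materializing the full tabular transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n_actions = 10, 10, 4   # assumed dimension sizes (illustrative)
gamma = 0.95

def random_stochastic(n):
    """Random row-stochastic matrix standing in for a per-dimension model."""
    m = rng.random((n, n))
    return m / m.sum(axis=1, keepdims=True)

# Per-dimension transition factors, shape (A, n, n) each: 2 * A * n^2 entries,
# versus A * (n1*n2)^2 for the equivalent tabular transition matrix.
P1 = np.stack([random_stochastic(n1) for _ in range(n_actions)])
P2 = np.stack([random_stochastic(n2) for _ in range(n_actions)])
R = rng.random((n_actions, n1, n2))  # reward R(a, x, y), also illustrative

V = np.zeros((n1, n2))
for _ in range(1000):
    # Expected next value per action via tensor contraction over the factors:
    # EV[a,x,y] = sum_{i,j} P1[a,x,i] * P2[a,y,j] * V[i,j]
    EV = np.einsum('axi,ayj,ij->axy', P1, P2, V)
    V_new = (R + gamma * EV).max(axis=0)   # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:   # convergence check
        V = V_new
        break
    V = V_new

# Greedy policy extracted from the converged value function.
policy = (R + gamma * np.einsum('axi,ayj,ij->axy', P1, P2, V)).argmax(axis=0)
print(V.shape, policy.shape)
```

Even in this toy setting the factored form stores 2 x 4 x 100 = 800 transition entries instead of the 4 x 100^2 = 40,000 a tabular matrix over the joint state space would need, which is the kind of memory saving the abstract refers to.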