One of the popular approaches for low-rank tensor completion is to use the {\it latent trace norm} regularization. However, most existing works in this direction learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that instead encourages learning a non-sparse combination of tensors. We develop a dual framework for solving the proposed low-rank tensor completion problem. Within this framework, we first derive a novel characterization of the solution space via an interesting factorization of the optimal solution, which allows us to propose two scalable optimization formulations. These problems are shown to lie on a Cartesian product of Riemannian spectrahedron manifolds, and we exploit the versatile Riemannian optimization framework to develop computationally efficient trust-region algorithms. Experiments on several real-world datasets across applications illustrate the efficacy of the proposed algorithms.
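For context, a brief sketch of the standard latent trace norm from the tensor completion literature may help (this is the baseline regularizer, not the variant proposed here), stated under the usual notation where $\mathcal{W} \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ is a $K$-mode tensor and $\mathbf{W}_{(k)}$ denotes the mode-$k$ unfolding:
\[
\|\mathcal{W}\|_{\mathrm{latent}} \;=\; \min_{\mathcal{W}^{(1)} + \cdots + \mathcal{W}^{(K)} = \mathcal{W}} \; \sum_{k=1}^{K} \big\|\mathbf{W}^{(k)}_{(k)}\big\|_{*},
\]
where $\|\cdot\|_{*}$ is the matrix nuclear (trace) norm. Because the sum acts like an $\ell_1$ penalty over the component nuclear norms, minimizers tend to concentrate on a few components, which is the sparse combination of tensors that the proposed variant is designed to avoid.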