Abstract: Modern datasets, from areas such as neuroimaging and geostatistics, often come in the form of a random sample of tensor-valued data which can be understood as noisy observations of an underlying smooth multidimensional random function. Many of the traditional techniques from functional data analysis are plagued by the curse of dimensionality and quickly become intractable as the dimension of the domain increases. In this paper, we propose a framework for learning multidimensional continuous representations from a random sample of tensors that is immune to several manifestations of the curse. These representations are defined to be multiplicatively separable and adapted to the data according to an $L^{2}$ optimality criterion, analogous to a multidimensional functional principal components analysis. We show that the resulting estimation problem can be solved efficiently by the tensor decomposition of a carefully defined reduction transformation of the observed data. The incorporation of both regularization and dimensionality reduction is discussed. The advantages of the proposed method over competing methods are demonstrated in a simulation study. We conclude with a real data application in neuroimaging.
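As a rough illustration of the multiplicatively separable representation idea, the following NumPy sketch builds a product basis for a simulated sample of noisy 2D tensors from the leading singular vectors of each domain mode's unfolding. This is only an assumed HOSVD-style stand-in for exposition, not the paper's reduction transformation or estimator; all variable names and the toy data-generating process are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample of N noisy observations of a smooth random surface on [0,1]^2
# (assumption: stands in for the paper's tensor-valued sample).
N, m1, m2 = 100, 50, 60
s = np.linspace(0, 1, m1)
t = np.linspace(0, 1, m2)
A = np.stack([np.sin(np.pi * s), np.cos(2 * np.pi * s)], axis=1)  # (m1, 2)
B = np.stack([np.cos(np.pi * t), np.sin(3 * np.pi * t)], axis=1)  # (m2, 2)
scores = rng.normal(size=(N, 2))
Y = np.einsum('nk,ik,jk->nij', scores, A, B)
Y += 0.1 * rng.normal(size=Y.shape)  # measurement noise

# Reduction step (illustrative): unfold the centered data tensor along each
# domain mode and keep the leading singular vectors as marginal bases.
Yc = Y - Y.mean(axis=0)
K = 2  # marginal basis functions per mode
U1 = np.linalg.svd(Yc.transpose(1, 0, 2).reshape(m1, -1),
                   full_matrices=False)[0][:, :K]  # basis over s
U2 = np.linalg.svd(Yc.transpose(2, 0, 1).reshape(m2, -1),
                   full_matrices=False)[0][:, :K]  # basis over t

# Separable product basis phi_a(s) * psi_b(t): project and reconstruct,
# reporting the relative L2 reconstruction error of the sample.
coefs = np.einsum('nij,ia,jb->nab', Yc, U1, U2)
recon = np.einsum('nab,ia,jb->nij', coefs, U1, U2)
rel_err = np.linalg.norm(recon - Yc) / np.linalg.norm(Yc)
print(f"relative L2 reconstruction error: {rel_err:.3f}")
```

Because the basis is a product of one-dimensional factors, storage and projection costs grow additively rather than multiplicatively in the domain dimension, which is the sense in which such separable representations can sidestep manifestations of the curse of dimensionality.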