Abstract: Manifold learning is a central task in modern statistics and data science. Many datasets (cells, documents, images, molecules) can be represented as point clouds embedded in a high-dimensional ambient space; however, the degrees of freedom intrinsic to the data are usually far fewer than the number of ambient dimensions. Detecting a latent manifold along which the data are embedded is a prerequisite for a wide family of downstream analyses. Real-world datasets are subject to noisy observations and sampling, so distilling information about the underlying manifold is a major challenge. We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation to construct a sparse and adaptive affinity matrix, which can be interpreted as a generalisation of the bistochastic kernel normalisation. We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise, and exhibit these results in simulations. We identify a highly efficient computational scheme for computing this optimal transport for discrete data and demonstrate that it outperforms competing methods in a set of examples.
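For concreteness, a schematic statement of the affinity construction described above, assuming it follows the standard symmetric, quadratically regularised optimal transport formulation (the symbols $W$, $C$, and $\varepsilon$, and the choice of marginal $\frac{1}{n}\mathbf{1}$, are our notational assumptions, not necessarily the paper's):
\[
W^\star \;=\; \operatorname*{arg\,min}_{\substack{W \ge 0,\; W = W^\top,\\ W\mathbf{1} = \frac{1}{n}\mathbf{1}}} \;\sum_{i,j=1}^{n} W_{ij}\,\|x_i - x_j\|^2 \;+\; \varepsilon \sum_{i,j=1}^{n} W_{ij}^2,
\]
where $x_1,\dots,x_n$ are the observed points and $\varepsilon > 0$ sets the regularisation strength. The doubly stochastic constraint plays the role of the bistochastic kernel normalisation, while the quadratic penalty, in place of the entropic penalty of Sinkhorn-type schemes, induces sparsity in the affinity matrix $W^\star$.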
Abstract: We consider the conjecture proposed in Matsumoto, Zhang and Schiebinger (2022) that optimal transport with quadratic regularisation can be used to construct a graph whose discrete Laplace operator converges to the Laplace--Beltrami operator. We derive first-order optimal potentials for the problem under consideration and find that the resulting solutions exhibit a surprising resemblance to the well-known Barenblatt--Pattle solution of the porous medium equation. Relying on these first-order optimal potentials, we then derive the pointwise $L^2$-limit of such discrete operators built from an i.i.d. random sample on a smooth compact manifold. Simulation results complementing the limiting-distribution results are also presented.
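As context for the resemblance noted above, we recall standard facts about the porous medium equation; the link to the first-order potentials is only sketched here. The equation
\[
\partial_t u = \Delta(u^m), \qquad m > 1,
\]
admits the Barenblatt--Pattle self-similar solution on $\mathbb{R}^d$,
\[
u(x,t) = t^{-\alpha}\Bigl(C - \kappa\,\|x\|^2\, t^{-2\beta}\Bigr)_+^{\frac{1}{m-1}}, \qquad \alpha = \frac{d}{d(m-1)+2}, \quad \beta = \frac{\alpha}{d}, \quad \kappa = \frac{(m-1)\beta}{2m}.
\]
For $m = 2$ the exponent $1/(m-1)$ equals $1$, so the profile is a truncated paraboloid $(C - \kappa\|x\|^2)_+$. By the well-known dual characterisation of quadratically regularised transport plans, the optimal coupling takes the truncated form $(\varphi_i + \varphi_j - \|x_i - x_j\|^2)_+$ up to scaling, where $\varphi$ denotes the dual potentials; this shared truncated-quadratic structure is, plausibly, the resemblance alluded to in the abstract.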