Abstract: We introduce a new framework for data denoising, partially inspired by martingale optimal transport. For a given noisy distribution (the data), our approach finds the closest distribution to it among all distributions which 1) have a particular prescribed structure (expressed by requiring that they lie in a particular domain), and 2) are self-consistent with the data. We show that this amounts to maximizing the variance among measures in the domain which are dominated in convex order by the data. For particular choices of the domain, this problem, and a relaxed version of it in which the self-consistency condition is removed, are intimately related to various classical approaches to denoising. We prove that our general problem has several desirable features: solutions exist under mild assumptions, enjoy certain robustness properties, and, for very simple domains, coincide with solutions to the relaxed problem. We also introduce a novel relationship between distributions, termed Kantorovich dominance, which retains certain aspects of the convex order while being a weaker, more robust, and easier-to-verify condition. Building on this, we propose and analyze a new denoising problem obtained by substituting Kantorovich dominance for the convex order in the framework above. We demonstrate that this revised problem shares some characteristics with the full convex order problem but offers enhanced stability, greater computational efficiency, and, in specific domains, more meaningful solutions. Finally, we present simple numerical examples illustrating solutions to both the full convex order problem and the Kantorovich dominance problem.
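As an illustrative sketch of the variance-maximization formulation stated in the abstract (the notation below is our own shorthand, not taken from the paper): writing $\mu$ for the noisy data distribution, $\mathcal{D}$ for the prescribed structural domain, and $\preceq_c$ for the convex order, the core problem can be read as
\[
  \max_{\nu \in \mathcal{D}} \ \operatorname{Var}(\nu)
  \quad \text{subject to} \quad \nu \preceq_c \mu,
\]
where $\nu \preceq_c \mu$ means $\int \varphi \, d\nu \le \int \varphi \, d\mu$ for every convex function $\varphi$. The Kantorovich dominance variant described in the abstract would replace the constraint $\nu \preceq_c \mu$ with this weaker relation.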
Abstract: A principal curve serves as a powerful tool for uncovering the underlying structure of data through a one-dimensional smooth and continuous representation. Building on optimal transport theory, this paper introduces a novel principal curve framework constrained by monotonicity, with rigorous theoretical justification. We establish statistical guarantees for our monotone curve estimate, including bounds on its expected empirical and generalized mean squared errors, and we prove that such estimates exist. These statistical foundations justify adopting the popular early-stopping procedure from machine learning when implementing our numerical algorithm with neural networks. Comprehensive simulation studies reveal that the proposed monotone curve estimate outperforms competing methods in accuracy when the data exhibit a monotonic structure. Moreover, through two real-world applications, to futures prices of copper, gold, and silver and to avocado prices and sales volumes, we demonstrate the robustness of our curve estimate under variable transformations, further confirming its applicability to noisy and complex data sets. We believe that this monotone curve-fitting framework offers significant potential for the many applications where monotonic relationships are intrinsic or need to be imposed.
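As a minimal sketch of the kind of objective involved (notation assumed for illustration; the paper's optimal-transport formulation may differ): a principal curve $f$ is classically characterized as minimizing the expected squared distance from the data $X$ to the curve, here with $f$ restricted to a monotone class:
\[
  \min_{f \in \mathcal{F}_{\uparrow}} \ \mathbb{E}\Big[ \inf_{t \in [0,1]} \| X - f(t) \|^2 \Big],
\]
where $\mathcal{F}_{\uparrow}$ denotes a class of smooth curves whose coordinates are monotone in $t$, and the inner infimum projects $X$ onto the curve. In the setup described above, the empirical counterpart replaces the expectation with a sample average and is minimized over neural-network parametrizations, with early stopping serving as regularization.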