Abstract: Mean field variational inference (VI) is the problem of finding the closest product (factorized) measure, in the sense of relative entropy, to a given high-dimensional probability measure $\rho$. The well-known Coordinate Ascent Variational Inference (CAVI) algorithm aims to approximate this product measure by iteratively optimizing over one coordinate (factor) at a time, which can be done explicitly. Despite its popularity, the convergence of CAVI remains poorly understood. In this paper, we prove the convergence of CAVI for log-concave densities $\rho$. If additionally $\log \rho$ has Lipschitz gradient, we obtain a linear rate of convergence, and if moreover $\rho$ is strongly log-concave, we obtain an exponential rate. Our analysis starts from the observation that mean field VI, while notoriously non-convex in the usual sense, is in fact displacement convex in the sense of optimal transport when $\rho$ is log-concave. This allows us to adapt techniques from the optimization literature on coordinate descent algorithms in Euclidean space.
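To make the coordinate updates concrete, the following is a minimal sketch (our illustration, not taken from the paper) of CAVI on a bivariate Gaussian target, for which each coordinate optimization has a closed form; all numerical values are assumptions chosen for the example:

```python
import numpy as np

# A minimal sketch of CAVI on a toy strongly log-concave target:
# rho = N(mu, Lambda^{-1}) on R^2, approximated by a product of two
# 1-D Gaussians. For Gaussian rho the coordinate update is explicit:
# the optimal factor q_i is N(m_i, 1 / Lambda_ii), with m_i as below.
mu = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.8],
                [0.8, 1.5]])    # precision matrix (symmetric positive definite)

m = np.zeros(2)                 # means of the current product approximation
for sweep in range(50):
    for i in range(2):
        j = 1 - i
        # exact coordinate-wise optimizer given the other factor
        m[i] = mu[i] - Lam[i, j] * (m[j] - mu[j]) / Lam[i, i]

print(m)   # -> [1.0, -2.0]: the factor means recover the true mean
```

On this Gaussian example the sweep over factor means is exactly a Gauss-Seidel iteration, and its linear convergence is an elementary instance of the rate described in the abstract under the Lipschitz-gradient assumption.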
Abstract: What is the optimal way to approximate a high-dimensional diffusion process by one in which the coordinates are independent? This paper presents a construction, called the \emph{independent projection}, which is optimal for two natural criteria. First, when the original diffusion is reversible with invariant measure $\rho_*$, the independent projection serves as the Wasserstein gradient flow for the relative entropy $H(\cdot\,|\,\rho_*)$ constrained to the space of product measures. This is related to recent Langevin-based sampling schemes proposed in the statistical literature on mean field variational inference. In addition, we provide both qualitative and quantitative results on the long-time convergence of the independent projection; the quantitative rates in the log-concave case are derived via a new variant of the logarithmic Sobolev inequality. Second, among all processes with independent coordinates, the independent projection is shown to exhibit the slowest growth rate of path-space entropy relative to the original diffusion. This sheds new light on the classical McKean-Vlasov equation and recent variants proposed for non-exchangeable systems, which can be viewed as special cases of the independent projection.
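In the reversible case, the independent projection of the Langevin diffusion $dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dW_t$ replaces the drift in coordinate $i$ by its conditional expectation given that coordinate, under the current product law. The following particle sketch is purely illustrative: the quadratic potential, the step sizes, and the shuffle-based one-sample estimate of the conditional expectation are our assumptions, not a scheme from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic potential V(x) = x^T A x / 2, so grad V(x) = A x.
A = np.array([[2.0, 0.8],
              [0.8, 1.5]])

def grad_V(x):                        # x has shape (n, d); row-wise gradient
    return x @ A                      # equals (A x_k)^T since A is symmetric

n, d, dt, steps = 2000, 2, 0.01, 500
Y = rng.standard_normal((n, d))       # ensemble approximating the product law
for _ in range(steps):
    drift = np.empty_like(Y)
    for i in range(d):
        Z = Y.copy()
        for j in range(d):
            if j != i:                # draw coordinate j from its empirical
                Z[:, j] = rng.permutation(Y[:, j])   # marginal, independently
        # crude one-sample estimate of E[ d_i V(Y) | Y_i ] under the product law
        drift[:, i] = grad_V(Z)[:, i]
    Y += -drift * dt + np.sqrt(2 * dt) * rng.standard_normal((n, d))

print(Y.mean(axis=0), Y.var(axis=0))  # marginals of the long-time product limit
```

Shuffling coordinate $j$ across the ensemble draws it from its empirical marginal independently of coordinate $i$, which serves as a Monte Carlo stand-in for the conditional expectation defining the projected drift.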
Abstract: The classical (overdamped) Langevin dynamics provides a natural algorithm for sampling from its invariant measure, which uniquely minimizes an energy functional over the space of probability measures, and which concentrates around the minimizer(s) of the associated potential when the noise parameter is small. We introduce analogous diffusion dynamics that sample from an entropically regularized optimal transport coupling, which uniquely minimizes the same energy functional but constrained to the set $\Pi(\mu,\nu)$ of couplings of two given marginal probability measures $\mu$ and $\nu$ on $\mathbb{R}^d$, and which concentrates around the optimal transport coupling(s) for small regularization parameter. More specifically, our process satisfies two key properties: First, the law of the solution at each time stays in $\Pi(\mu,\nu)$ if it is initialized there. Second, the long-time limit is the unique solution of an entropic optimal transport problem. In addition, we show by means of a new log-Sobolev-type inequality that the convergence holds exponentially fast, for sufficiently large regularization parameter and for a class of marginals which strictly includes all strongly log-concave measures. By studying the induced Wasserstein geometry of the submanifold $\Pi(\mu,\nu)$, we argue that the SDE can be viewed as a Wasserstein gradient flow on this space of couplings, at least when $d=1$, and we identify a conjectural gradient flow for $d \ge 2$. The main technical difficulty stems from the appearance of conditional expectation terms which serve to constrain the dynamics to $\Pi(\mu,\nu)$.
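For discrete marginals, the long-time limit object, the entropic optimal transport coupling, can be computed by standard Sinkhorn iterations. The sketch below is our illustration of that limiting coupling, not a discretization of the SDE; the marginals, the quadratic cost, and the regularization parameter eps are assumptions chosen for the example:

```python
import numpy as np

# Entropic optimal transport between two discrete marginals mu and nu
# on R, with quadratic cost, computed by Sinkhorn's algorithm. The
# result pi is a coupling in Pi(mu, nu), the set the SDE is constrained to.
x = np.linspace(-2, 2, 50)            # support points of mu
y = np.linspace(-2, 2, 60)            # support points of nu
mu = np.exp(-x**2); mu /= mu.sum()
nu = np.exp(-(y - 0.5)**2); nu /= nu.sum()

eps = 0.1                             # regularization parameter
K = np.exp(-(x[:, None] - y[None, :])**2 / eps)   # Gibbs kernel
u = np.ones_like(mu); v = np.ones_like(nu)
for _ in range(500):                  # each scaling enforces one marginal
    u = mu / (K @ v)
    v = nu / (K.T @ u)
pi = u[:, None] * K * v[None, :]      # entropic optimal coupling

print(np.abs(pi.sum(1) - mu).max(), np.abs(pi.sum(0) - nu).max())  # ~0
```

Each Sinkhorn scaling restores one of the two marginal constraints, loosely mirroring how the conditional expectation terms in the SDE keep the law of the solution inside $\Pi(\mu,\nu)$.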