Abstract: We revisit FedExProx - a recently proposed distributed optimization method designed to enhance the convergence properties of parallel proximal algorithms via extrapolation. In the process, we uncover a surprising flaw: its known theoretical guarantees on quadratic optimization tasks are no better than those offered by the vanilla Gradient Descent (GD) method. Motivated by this observation, we develop a novel analysis framework that establishes a tighter linear convergence rate for non-strongly convex quadratic problems. By accounting for both computation and communication costs, we demonstrate that FedExProx can indeed provably outperform GD, in stark contrast to the original analysis. Furthermore, we consider partial-participation scenarios and analyze two adaptive extrapolation strategies - based on gradient diversity and on Polyak stepsizes - again significantly improving on previous results. Moving beyond quadratics, we extend our analysis to general functions satisfying the Polyak-Łojasiewicz condition, outperforming the previous strongly convex analysis while operating under weaker assumptions. Backed by empirical results, our findings point to a new and stronger potential of FedExProx, paving the way for further exploration of the benefits of extrapolation in federated learning.
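To make the setting concrete, here is a minimal Python sketch of an extrapolated parallel-proximal iteration of the kind the abstract studies on quadratic problems. The function names (fedexprox_step, make_quadratic_prox), the closed-form quadratic proximal operator, and the parameter values are illustrative assumptions on our part, not the authors' implementation.

```python
import numpy as np

def make_quadratic_prox(A, b):
    """Closed-form proximal operator of f_i(x) = 0.5 * ||A x - b||^2:
    prox(x, gamma) = argmin_z 0.5*||A z - b||^2 + (1/(2*gamma)) * ||z - x||^2."""
    def prox(x, gamma):
        d = x.shape[0]
        return np.linalg.solve(gamma * A.T @ A + np.eye(d), gamma * A.T @ b + x)
    return prox

def fedexprox_step(x, client_proxes, gamma, alpha):
    """One extrapolated parallel-proximal step: each client returns the proximal
    point of its local loss at x, the server averages them and then moves beyond
    the average by the extrapolation parameter alpha (alpha = 1 recovers plain
    proximal averaging)."""
    prox_points = [prox(x, gamma) for prox in client_proxes]
    avg = np.mean(prox_points, axis=0)
    return x + alpha * (avg - x)

# Toy usage: a consistent (interpolation-regime) quadratic problem split across two clients.
rng = np.random.default_rng(0)
x_star = rng.standard_normal(3)
clients = []
for _ in range(2):
    A = rng.standard_normal((5, 3))
    clients.append(make_quadratic_prox(A, A @ x_star))

x = np.zeros(3)
for _ in range(300):
    x = fedexprox_step(x, clients, gamma=1.0, alpha=1.5)
print(np.linalg.norm(x - x_star))  # distance to the solution should shrink toward zero
```

The adaptive variants mentioned in the abstract would replace the fixed alpha above with a value recomputed each round (e.g., from gradient diversity or a Polyak-type rule); the sketch keeps it constant for simplicity.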
Abstract: Observable operator models (OOMs) offer a powerful framework for modelling stochastic processes, surpassing traditional hidden Markov models (HMMs) in generality and efficiency. However, using OOMs to model infinite-dimensional processes poses significant theoretical challenges. This article explores a rigorous approach to developing an approximation theory for OOMs of infinite-dimensional processes. Building upon foundational work outlined in an unpublished tutorial [Jae98], an inner product structure on the space of future distributions is rigorously established, and the continuity of observable operators with respect to the associated 2-norm is proven. The original theorem proven in this work describes a fundamental obstacle to turning an infinite-dimensional space of future distributions into a Hilbert space. The presented findings lay the groundwork for future research on approximating observable operators of infinite-dimensional processes, and a remedy for the encountered obstacle is suggested.
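For readers unfamiliar with OOMs, the following LaTeX sketch records the standard finite-dimensional definition that the abstract's infinite-dimensional theory generalizes; the notation (omega, tau_a, sigma) follows common Jaeger-style conventions and is an assumption on our part, not notation taken from the work itself.

```latex
% Sketch of a finite-dimensional OOM (assumed Jaeger-style notation): word
% probabilities of the process are obtained by composing observable operators.
\[
  P(a_1 a_2 \cdots a_n) \;=\; \sigma \, \tau_{a_n} \cdots \tau_{a_1} \, \omega ,
\]
where $\omega \in \mathbb{R}^m$ is the initial state, each observation symbol $a$
carries a linear observable operator $\tau_a \colon \mathbb{R}^m \to \mathbb{R}^m$,
and $\sigma$ is a linear functional satisfying $\sigma(\omega) = 1$ and
$\sigma\bigl(\textstyle\sum_a \tau_a v\bigr) = \sigma(v)$ for all $v$, so that the
assigned probabilities are nonnegative-consistent and sum to one over words of each length.
```

The approximation theory described in the abstract concerns what happens when the finite-dimensional state space above is replaced by an infinite-dimensional space of future distributions, where the choice of norm and the existence of a genuine Hilbert-space structure become the central difficulties.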