Abstract: This paper develops a fully distributed differentially private learning algorithm for solving nonsmooth optimization problems. We adapt the Alternating Direction Method of Multipliers (ADMM) to the distributed setting and employ an approximation of the augmented Lagrangian to handle nonsmooth objective functions. Furthermore, we ensure zero-concentrated differential privacy (zCDP) by perturbing the outcome of the computation at each agent with variance-decreasing Gaussian noise. This privacy-preserving method allows for better accuracy than the conventional $(\epsilon, \delta)$-DP and stronger guarantees than the more recent R\'enyi-DP. The resulting fully distributed algorithm offers a competitive privacy-accuracy trade-off and handles nonsmooth and not necessarily strongly convex problems. We provide complete theoretical proofs of the privacy guarantees and of the convergence of the algorithm to the exact solution. We also prove that, under additional assumptions, the algorithm converges at a linear rate. Finally, we observe in simulations that the developed algorithm outperforms existing methods.
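As a rough illustration of the perturbation mechanism described above, the Python sketch below shows one possible agent-level update in which the result of a local ADMM-style step is perturbed with Gaussian noise whose standard deviation decays geometrically over iterations. The specific update rule, the parameters rho, sigma0 and decay, and the helper grad_f are illustrative assumptions only, not the paper's exact algorithm.

```python
import numpy as np

def private_local_update(x, neighbors_x, grad_f, rho, t, sigma0=1.0, decay=0.9, rng=None):
    """Illustrative agent-level update with variance-decreasing Gaussian noise.

    The agent moves toward the average of its neighbors' iterates while taking a
    (sub)gradient step on its local objective (a first-order surrogate for the exact
    augmented-Lagrangian minimization), then perturbs the outcome with Gaussian
    noise whose scale shrinks geometrically with the iteration counter t.
    """
    rng = np.random.default_rng() if rng is None else rng
    consensus = np.mean(neighbors_x, axis=0)        # average of neighbors' current iterates
    x_new = consensus - (1.0 / rho) * grad_f(x)     # approximate local ADMM-style step
    sigma_t = sigma0 * (decay ** t)                 # variance-decreasing noise schedule
    return x_new + rng.normal(0.0, sigma_t, size=x_new.shape)
```

In such a scheme, each agent would call this routine once per communication round with its own local (sub)gradient, so only noisy quantities ever leave the agent.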
Abstract: We develop a new consensus-based distributed algorithm for solving learning problems with feature partitioning and non-smooth convex objective functions. Such learning problems are not separable, i.e., the associated objective functions cannot be written directly as a sum of agent-specific objective functions. To overcome this challenge, we recast the underlying optimization problem as a dual convex problem whose structure is suitable for distributed optimization using the alternating direction method of multipliers (ADMM). Next, we propose a new method for solving the minimization problem associated with the ADMM update step that does not rely on any conjugate function. Calculating the relevant conjugate functions may be hard or even infeasible, especially when the objective function is non-smooth. To avoid computing any conjugate function, we solve the optimization problem associated with each ADMM iteration in the dual domain using the block coordinate descent algorithm. Unlike existing related algorithms, the proposed algorithm is fully distributed and dispenses with the conjugate of the objective function. We prove theoretically that the proposed algorithm attains the optimal centralized solution. We also confirm its network-wide convergence via simulations.
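To illustrate the kind of conjugate-free inner solver mentioned above, the sketch below runs cyclic block coordinate descent on a generic dual objective, updating one block of dual variables at a time via gradient steps. The function names, the fixed step size, and the number of sweeps are assumptions made for illustration; the paper's actual per-iteration subproblem and stopping rules are not reproduced here.

```python
import numpy as np

def block_coordinate_descent(grad_blocks, lam0, step=0.1, sweeps=100):
    """Cyclic block coordinate descent on a (negated) dual objective.

    grad_blocks[b](lam) returns the partial gradient of the objective with
    respect to block b, given the full list of blocks lam. Each sweep updates
    one block at a time while the others are held fixed, so no conjugate
    function of the primal objective ever has to be evaluated in closed form.
    """
    lam = [np.array(blk, dtype=float) for blk in lam0]
    for _ in range(sweeps):
        for b, grad_b in enumerate(grad_blocks):
            lam[b] = lam[b] - step * grad_b(lam)    # gradient step on block b only
    return lam
```

In a distributed setting, each block of dual variables would typically be associated with one agent's local features, so the per-block updates can be carried out with only neighbor-to-neighbor communication.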