An Accelerated Second-Order Method for Distributed Stochastic Optimization
Mar 26, 2021
Artem Agafonov
Pavel Dvurechensky
Gesualdo Scutari
Alexander Gasnikov
Dmitry Kamzolov
Alexander Lukashevich
Abstract
We consider centralized distributed algorithms for general stochastic convex optimization problems, which we approximate by a finite-sum minimization problem whose summands are distributed among computational nodes. We exploit statistical similarity between the summands and the whole sum to construct a distributed accelerated cubic-regularized Newton's method that achieves the lower communication complexity bound for this setting and improves upon the existing upper bound. Further, we use this algorithm to obtain convergence rate bounds for the original stochastic optimization problem, and we compare our bounds with the existing ones in several regimes where the goal is to minimize the number of communication rounds and to improve parallelization as the number of workers grows.
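The workhorse inside such methods is the cubic-regularized Newton step of Nesterov and Polyak. Below is a minimal single-machine sketch of that step in Python, not the accelerated distributed algorithm from the paper; the names (`cubic_newton_step`, `f_grad`, `f_hess`) and the regularization constant `M` are illustrative choices of ours, and the cubic subproblem is solved by a simple bisection on the step norm.

```python
import numpy as np

def cubic_newton_step(grad, hess, M, tol=1e-10, max_iter=100):
    """One cubic-regularized Newton step (Nesterov-Polyak style):
    h = argmin_h <grad, h> + 0.5 h^T hess h + (M/6) ||h||^3.
    For convex problems the minimizer satisfies
    (hess + (M/2) r I) h = -grad with r = ||h||, so we bisect on r."""
    I = np.eye(grad.shape[0])

    def step_for(r):
        return np.linalg.solve(hess + 0.5 * M * r * I, -grad)

    # ||step_for(r)|| is decreasing in r; bracket the root of ||step_for(r)|| = r.
    r_lo, r_hi = 0.0, 1.0
    while np.linalg.norm(step_for(r_hi)) > r_hi:
        r_hi *= 2.0
    for _ in range(max_iter):
        r = 0.5 * (r_lo + r_hi)
        if np.linalg.norm(step_for(r)) > r:
            r_lo = r
        else:
            r_hi = r
        if r_hi - r_lo < tol:
            break
    return step_for(r_hi)

# Toy usage: minimize the convex function f(x) = 0.5||x||^2 + 0.25(x^T x)^2.
def f_grad(x):
    return x + np.dot(x, x) * x

def f_hess(x):
    return (1.0 + np.dot(x, x)) * np.eye(x.shape[0]) + 2.0 * np.outer(x, x)

x = np.array([2.0, -1.5])
for _ in range(20):
    # M is a rough bound on the Hessian Lipschitz constant over the level set.
    x = x + cubic_newton_step(f_grad(x), f_hess(x), M=15.0)
print(x)  # converges toward the minimizer at the origin
```

In the distributed setting of the paper, one node would form such a second-order model using its local Hessian (close to the global one by statistical similarity), which is what keeps the communication complexity low.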
Type
Conference paper
Publication
In 2021 60th IEEE Conference on Decision and Control (CDC)