FLECS: A Federated Learning Second-Order Framework via Compression and Sketching
Jun 4, 2022
Artem Agafonov

Dmitry Kamzolov
Rachael Tappenden
Alexander Gasnikov
Martin Takáč
Abstract
Inspired by the recent work FedNL (Safaryan et al., FedNL: Making Newton-Type Methods Applicable to Federated Learning), we propose FLECS, a new communication-efficient second-order framework for federated learning. The proposed method reduces the high memory requirements of FedNL by using an L-SR1-type update for the Hessian approximation, which is stored on the central server. Each device needs only a low-dimensional 'sketch' of the Hessian to generate an update, so both the memory cost and the number of Hessian-vector products required on each device are low. Biased and unbiased compression operators keep communication costs low as well. Convergence guarantees for FLECS are provided in both the strongly convex and nonconvex cases, and local linear convergence is also established under strong convexity. Numerical experiments confirm the practical benefits of the new FLECS algorithm.
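To make the two ingredients named in the abstract concrete, here is a minimal illustrative Python sketch on a toy quadratic problem; it is not the authors' implementation, and the names `top_k`, `hessian_sketch`, and `hvp`, as well as the toy dimensions, are our own assumptions. It shows how a device can form a low-dimensional sketch of its Hessian using only Hessian-vector products, and how a biased top-k compressor sparsifies that sketch before communication.

```python
import numpy as np

def top_k(x, k):
    """Biased top-k compressor: keep only the k largest-magnitude entries.
    (An illustrative example of a biased compression operator.)"""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x.ravel()))[-k:]
    out.ravel()[idx] = x.ravel()[idx]
    return out

def hessian_sketch(hvp, S):
    """Sketch the local Hessian via Hessian-vector products only:
    each column of S costs one product, yielding H @ S without forming H."""
    return np.column_stack([hvp(S[:, j]) for j in range(S.shape[1])])

# Toy quadratic f(w) = 0.5 * w^T A w on a single device (assumed setup).
rng = np.random.default_rng(0)
d, m, k = 20, 3, 30                 # dimension, sketch width, compression budget
A = rng.standard_normal((d, d)); A = A @ A.T / d
hvp = lambda v: A @ v               # exact Hessian-vector product for this toy

S = rng.standard_normal((d, m))     # low-dimensional sketch matrix
Y = hessian_sketch(hvp, S)          # d x m sketch: only m HVPs on the device
C = top_k(Y, k)                     # compress before sending to the server
print("relative sketch error:",
      np.linalg.norm(C - A @ S) / np.linalg.norm(A @ S))
```

In this sketch the device never stores a full d x d Hessian: it touches the Hessian only through m matrix-vector products, and communication is reduced further by sending the sparsified d x m matrix rather than the dense sketch.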
Type
Publication
preprint, under review