Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Abstract

The decentralized optimization problem has recently attracted growing attention. Most existing methods are deterministic, incur a high per-iteration cost, and have a convergence rate that depends quadratically on the problem's condition number. Moreover, dense communication is necessary to ensure convergence even when the dataset is sparse. In this paper, we generalize the decentralized optimization problem to a monotone operator root-finding problem and propose a stochastic algorithm named DSBA that (i) converges geometrically at a rate depending only linearly on the problem's condition number, and (ii) can be implemented using sparse communication only. Additionally, DSBA handles learning problems such as AUC-maximization, which cannot be tackled efficiently in the decentralized setting. Experiments on convex minimization and AUC-maximization validate the efficiency of our method.
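For context, a standard way to write the decentralized optimization problem referenced above is sketched below in LaTeX. This is illustrative notation only, not the paper's own; the specific monotone operator that DSBA applies its root-finding scheme to is defined in the paper.

% Decentralized consensus optimization over n nodes, where each node i
% holds a private local objective f_i (standard formulation; the paper's
% exact notation may differ):
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)

% For smooth convex f, minimizers coincide with roots of the gradient
% map, which motivates the more general monotone-operator view:
\text{find } x^\star \text{ such that } 0 \in A(x^\star),
\quad \text{with } A = \nabla f \text{ in the smooth convex case.}

Viewing the problem through the operator A rather than the objective f is what lets the framework cover objectives beyond standard convex minimization, such as the AUC-maximization problems mentioned in the abstract.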

Venue
ICML 2018
Publication Time
2018
Authors
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian