Tencent AI Lab Official Website

Efficient Distributed Learning with Sparsity
Abstract
We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, the worker machines compute the gradient of the loss on their local data and the master machine solves a shifted ℓ₁-regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and is independent of the other problem parameters, the proposed approach provably matches the estimation error bound of centralized methods.
Venue: ICML 2017
Publication Time: Aug 2017
Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang