Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
Abstract

Large-scale distributed optimization is of great importance in various applications. For data-parallel distributed learning, inter-node gradient communication often becomes the performance bottleneck. In this paper, we propose the error compensated quantized stochastic gradient descent algorithm to improve training efficiency. Local gradients are quantized to reduce the communication overhead, and the accumulated quantization error is utilized to speed up the convergence. Furthermore, we present theoretical guarantees on the convergence and demonstrate its advantage over competitors. Extensive experiments indicate that our algorithm can compress gradients by up to two orders of magnitude without any performance degradation.
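The sketch below illustrates the worker-side idea described in the abstract: quantize the gradient before communication, keep the quantization residual locally, and fold it back into the next step. It is a minimal illustration, not the paper's exact formulation; the QSGD-style stochastic quantizer and the names `quantize`, `ecq_sgd_step`, and the `decay` factor are assumptions made for this example.

```python
import numpy as np

def quantize(v, num_levels=4):
    """Stochastic uniform quantization to a few levels per coordinate
    (assumption: a simple QSGD-style quantizer; the paper's scheme may differ)."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * num_levels      # map |v_i| into [0, num_levels]
    lower = np.floor(scaled)
    prob = scaled - lower                       # probability of rounding up
    levels = lower + (np.random.rand(*v.shape) < prob)
    return np.sign(v) * levels * norm / num_levels

def ecq_sgd_step(w, grad, error, lr=0.1, decay=1.0):
    """One worker-side step of error-compensated quantized SGD (sketch):
    compensate with the accumulated error, quantize, store the new residual,
    and update the model with the quantized (communicated) gradient."""
    compensated = grad + decay * error          # add accumulated quantization error
    q = quantize(compensated)                   # low-precision message to be communicated
    new_error = compensated - q                 # residual kept locally for the next step
    w_new = w - lr * q                          # model update with the quantized gradient
    return w_new, new_error
```

In a data-parallel setting, each worker would send only the quantized `q` (plus a scaling factor) over the network, which is where the communication savings come from.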

Venue
ICML 2018
Publication Time
2018
Authors
Jiaxiang Wu, Weidong Huang, Junzhou Huang, and Tong Zhang