On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization
Abstract
We consider a wide spectrum of regularized stochastic minimization problems in which the regularization term is composed with a linear function. Examples of this formulation include graph-guided regularized minimization, the generalized Lasso, and a class of l_1-regularized problems. The computational challenge is that the proximal mapping associated with the regularization term has no closed-form solution due to the imposed linear composition. Fortunately, the structure of the regularization term allows us to reformulate the problem as a convex-concave saddle point problem, which can be solved using the Primal-Dual Hybrid Gradient (PDHG) approach. However, this approach may be inefficient in realistic applications, since computing the full gradient of the expected objective function can be very expensive when the number of input data samples is large. To address this issue, we propose a Stochastic PDHG (SPDHG) algorithm with either uniformly or non-uniformly averaged iterates. With uniformly averaged iterates, the SPDHG algorithm converges in expectation at an O(1/√t) rate for general convex objectives and an O(log(t)/t) rate for strongly convex objectives. With non-uniformly averaged iterates, the SPDHG algorithm converges in expectation at an O(1/t) rate for strongly convex objectives. Numerical experiments on different genres of datasets demonstrate that our proposed algorithm outperforms other competing algorithms.
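To make the setup concrete, the following is a minimal sketch of the stochastic primal-dual iteration for one representative instance: graph-guided regularized least squares, min_x (1/n) Σ_i f_i(x) + λ‖Fx‖_1, rewritten as the saddle point problem min_x max_{‖y‖_∞ ≤ λ} (1/n) Σ_i f_i(x) + ⟨Fx, y⟩. The least-squares loss f_i, the linear map F, the fixed step sizes tau and sigma, and the update order below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def spdhg(A, b, F, lam, tau=1e-2, sigma=1e-2, iters=10000, seed=0):
    """Sketch of SPDHG with uniformly averaged iterates (illustrative step sizes)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)              # primal variable
    y = np.zeros(F.shape[0])     # dual variable, constrained to ||y||_inf <= lam
    x_avg = np.zeros(d)
    for t in range(iters):
        i = rng.integers(n)                          # sample one data point
        g = (A[i] @ x - b[i]) * A[i]                 # stochastic gradient of 0.5*(a_i^T x - b_i)^2
        y = np.clip(y + sigma * (F @ x), -lam, lam)  # dual ascent, then project onto the l_inf ball
        x = x - tau * (g + F.T @ y)                  # primal descent through the composite term F^T y
        x_avg += (x - x_avg) / (t + 1)               # running uniform average of the iterates
    return x_avg

if __name__ == "__main__":
    # Tiny synthetic demo: F = I recovers plain Lasso; a graph incidence
    # matrix in its place gives the graph-guided fused regularizer.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = A @ rng.standard_normal(50) + 0.1 * rng.standard_normal(200)
    x_hat = spdhg(A, b, F=np.eye(50), lam=0.1)
    print(x_hat[:5])
```

Note that the dual update uses np.clip because projection onto the l_inf ball of radius λ is exactly the proximal step for the conjugate of λ‖·‖_1, which is what makes the composite term tractable after the saddle point reformulation.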
Venue
European Conference on Artificial Intelligence (ECAI)
Publication Time
2016
Authors
Linbo Qiao, Tianyi Lin, Yu-Gang Jiang, Fan Yang, Wei Liu, and Xicheng Lu