On Quadratic Convergence of DC Proximal Newton Algorithm in Nonconvex Sparse Learning
Abstract

We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions. Our proposed algorithm integrates the proximal Newton algorithm with multi-stage convex relaxation based on difference of convex (DC) programming, and enjoys both strong computational and statistical guarantees. Specifically, by leveraging a sophisticated characterization of sparse modeling structures/assumptions (i.e., local restricted strong convexity and Hessian smoothness), we prove that within each stage of convex relaxation, our proposed algorithm achieves (local) quadratic convergence, and eventually obtains a sparse approximate local optimum with optimal statistical properties after only a few convex relaxations. Numerical experiments are provided to support our theory.
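To make the two ingredients of the abstract concrete, here is a hedged, generic formulation of multi-stage convex relaxation combined with proximal Newton steps; the notation is illustrative and not taken verbatim from the paper. Given a smooth loss \(\mathcal{L}(\theta)\) and a nonconvex sparse regularizer \(\mathcal{R}_\lambda\) expressed as a difference of convex functions, each relaxation stage \(k\) replaces \(\mathcal{R}_\lambda\) by a reweighted \(\ell_1\) penalty built from the previous stage's solution and solves the resulting convex subproblem by proximal Newton iterations:
\[
\theta^{(k)} \approx \arg\min_{\theta}\; \mathcal{L}(\theta) + \sum_{j} \lambda_j^{(k)}\,|\theta_j|,
\qquad
\lambda_j^{(k)} = \mathcal{R}_\lambda'\!\big(|\theta_j^{(k-1)}|\big),
\]
where each proximal Newton step minimizes a local quadratic model of \(\mathcal{L}\) plus the weighted \(\ell_1\) term. The quadratic convergence claimed in the abstract refers to these inner proximal Newton iterations within a single convex relaxation stage.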

Venue
NIPS 2017
Publication Time
Dec 2017
Authors
Xingguo Li, Lin Yang, Jason Ge, Jarvis Haupt, Tong Zhang, Tuo Zhao