Machine Learning

Optimization approaches for machine learning problems

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today’s machine learning models call for a reassessment of existing techniques, and established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods are being revisited in these new settings.
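
As a concrete illustration of the first-order, stochastic-approximation viewpoint, here is a minimal sketch of stochastic gradient descent on a least-squares objective. It is a generic textbook example rather than a method from the publications below; the function name, step size, and toy data are all illustrative assumptions.

    import numpy as np

    def sgd_least_squares(A, b, lr=0.01, epochs=50, seed=0):
        # Stochastic gradient descent on f(x) = (1/2n) * ||Ax - b||^2,
        # processing one sample per step in a random order. Each per-sample
        # gradient is an unbiased estimate of the full gradient, which is
        # the classic stochastic-approximation setup.
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of one term
                x -= lr * grad_i
        return x

    # Toy usage: recover a planted solution from noisy linear measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5))
    x_true = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    print(np.linalg.norm(sgd_least_squares(A, b) - x_true))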

Our work also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods.
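
The splitting and proximal themes can be illustrated on a standard regularized problem. The sketch below applies proximal gradient (forward-backward splitting) to the lasso objective 0.5 * ||Ax - b||^2 + lam * ||x||_1, alternating a gradient step on the smooth loss with the soft-thresholding prox of the L1 term. It is a minimal generic instance under these assumptions, not an implementation of the methods in the publications that follow.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam, iters=500):
        # Proximal gradient for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1:
        # a gradient step on the smooth term (forward), then the prox of
        # the nonsmooth L1 term (backward).
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)   # gradient of the smooth part
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Toy usage: sparse recovery with a Gaussian design matrix.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((100, 30))
    x_true = np.zeros(30)
    x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    print(ista(A, b, lam=0.1)[:5])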

Selected Publications

  • Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo, and Jong-Shi Pang, “Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization”, Proc. NIPS 2014.
  • Ruoyu Sun* and Mingyi Hong, “Improved Iteration Complexity Bounds of Cyclic Block Coordinate Descent for Convex Problems”, Proc. NIPS 2015 (*equal contribution).
  • Davood Hajinezhad, Mingyi Hong, Tuo Zhao, and Zhaoran Wang, “NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization”, Proc. NIPS 2016.