2. Optimization Theory
In this chapter, we begin our discussion of classical optimization theory. We start with the unconstrained optimization problem, covering basic optimality conditions and classical algorithms such as gradient descent, along with related methods including coordinate descent, Newton's method, and Nesterov's accelerated gradient method. We then move on to constrained problems, again discussing the key properties of this class of problems and the associated methods. Throughout the chapter, we use simple machine learning problems as examples to illustrate the concepts and the utility of the algorithms introduced.
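As a small preview of the style of examples to come, the sketch below runs plain gradient descent on a toy least-squares problem. It is only an illustrative placeholder (the problem data, step-size choice, and iteration count are assumptions, not taken from the text); the algorithm itself is developed in detail later in the chapter.

```python
# Minimal sketch: gradient descent on min_x (1/2) * ||A x - b||^2.
# The data and step size are illustrative, not from the text.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))        # design matrix of a toy regression problem
b = rng.standard_normal(50)             # observed targets

x = np.zeros(5)                         # initial iterate
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L, with L the largest eigenvalue of A^T A

for _ in range(500):
    grad = A.T @ (A @ x - b)            # gradient of the least-squares objective
    x -= step * grad                    # gradient descent update

print("final objective value:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```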