The course will cover techniques in unconstrained and constrained
convex optimization and give a practical introduction to convex duality.
It will focus on (1) formulating convex optimization problems and
understanding their properties; (2) presenting and understanding
optimization approaches; and (3) understanding the dual problem.
Theoretical analysis of the convergence properties of the methods
will be presented only in limited form. Examples will be drawn mostly
from data fitting, statistics, and machine learning.
Formalization of optimization problems
Smooth unconstrained optimization: Gradient Descent, Conjugate Gradient Descent, Newton and Quasi-Newton methods; Line Search methods
Standard formulations of constrained optimization: Linear, Quadratic, Conic and Semi-Definite Programming
KKT optimality conditions
Lagrangian duality, constraint qualification, weak and strong duality
Fenchel conjugacy and its relationship to Lagrangian duality
Equality constrained Newton method
Log Barrier (Central Path) methods, and Primal-Dual optimization methods
Overview of the simplex method as an example of an active set method
Overview of the first-order oracle model, subgradient and separation oracles, and the Ellipsoid algorithm
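For reference, the Lagrangian duality topics above follow the standard convex-optimization setup (standard notation, not taken from course materials). For a problem

```latex
\min_{x}\; f(x) \quad \text{s.t.}\quad g_i(x) \le 0,\; i=1,\dots,m,
\qquad h_j(x) = 0,\; j=1,\dots,p,
```

the Lagrangian and the dual function are

```latex
L(x,\lambda,\nu) = f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x)
                 + \sum_{j=1}^{p} \nu_j\, h_j(x),
\qquad
q(\lambda,\nu) = \inf_{x}\, L(x,\lambda,\nu).
```

Weak duality says that for any \(\lambda \ge 0\) and any \(\nu\), \(q(\lambda,\nu)\) lower-bounds the optimal value of the original problem; strong duality (equality of the primal and dual optimal values) holds for convex problems under a constraint qualification such as Slater's condition.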
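To give a flavor of the smooth unconstrained methods listed above, here is a minimal sketch of gradient descent with a backtracking (Armijo) line search, applied to a least-squares data-fitting problem. The function names, parameters, and the example problem are illustrative, not code provided by the course:

```python
import numpy as np

def gradient_descent(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4,
                     tol=1e-8, max_iter=1000):
    """Gradient descent with a backtracking (Armijo) line search."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop when the gradient is tiny
            break
        t = alpha0
        # Shrink the step until the Armijo sufficient-decrease condition holds:
        #   f(x - t g) <= f(x) - c * t * ||g||^2
        while f(x - t * g) > f(x) - c * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# Illustrative least-squares problem: min_x 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
x_star = gradient_descent(f, grad, np.zeros(5))
```

Since the objective is smooth and strongly convex, the iterates converge linearly to the least-squares solution; the backtracking loop removes the need to know the gradient's Lipschitz constant in advance.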
There will be roughly 7-8 weekly homework assignments, counting
toward 50% of the grade. Assignments must be typed (not handwritten)
and submitted electronically as PDF. Collaborative work on the
homework is encouraged, but each student must write up their
solutions independently. Most assignments also include a NumPy
programming section.
The remaining 50% of the grade will be based on the final exam.