After taking a convex optimization class this past semester, I implemented a few basic algorithms for unconstrained optimization (e.g. Nesterov’s accelerated gradient descent) in Python in this repo: https://github.com/idc9/optimization_algos.

The purpose of this repo is for me to learn and to have bare-bones implementations of these algorithms sitting around. I tried to make the code as modular and simple as possible so that you (or a future me) can modify it for other purposes (e.g. add bells and whistles, implement other algorithms, etc.). While off-the-shelf solvers such as scikit-learn or cvxopt are preferable for many applications, there are times when you want full control over the solver.
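To make that concrete, here is a minimal sketch of what bare-bones implementations in this style might look like. The function names, signatures, and the least squares toy problem are illustrative choices of mine, not the repo's actual code:

```python
import numpy as np

def grad_descent(grad_f, x0, step_size=0.1, n_steps=100):
    """Plain gradient descent: x_{k+1} = x_k - t * grad_f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step_size * grad_f(x)
    return x

def nesterov_agd(grad_f, x0, step_size=0.1, n_steps=100):
    """Nesterov's accelerated gradient descent with the
    (k - 1) / (k + 2) momentum schedule."""
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    for k in range(1, n_steps + 1):
        y = x + ((k - 1) / (k + 2)) * (x - x_prev)  # momentum (extrapolation) step
        x_prev = x
        x = y - step_size * grad_f(y)  # gradient step at the extrapolated point
    return x

# Toy problem (hypothetical example): least squares,
# f(x) = 0.5 * ||Ax - b||^2 with gradient A^T (Ax - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_f = lambda x: A.T @ (A @ x - b)

print(grad_descent(grad_f, x0=np.zeros(2), step_size=0.2, n_steps=200))
print(nesterov_agd(grad_f, x0=np.zeros(2), step_size=0.2, n_steps=200))
# both should be close to the exact solution [0.5, 1.0]
```

Passing the gradient in as a plain function is one way to get the modularity described above: the solver knows nothing about the loss, so swapping in a different objective (or a stochastic gradient) only changes `grad_f`.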

Right now the repo focuses on first-order methods (GD, SGD, accelerated GD, etc.) for empirical risk minimization problems. For some useful introductory references, see:

A few more interesting references: