Theory of Computing Seminar
First-Order Iterative Methods in the Design of Fast Algorithms: from Multiplicative Weight Updates to Nesterov’s Method

Abstract: Fast iterative methods from Convex Optimization play a crucial
role in a number of recent breakthroughs in the design of
nearly-linear-time algorithms for fundamental combinatorial problems,
such as maximum flow and graph partitioning. Multiplicative Weight
Updates, Gradient Descent, and Nesterov's Method have become mainstays
in the construction and analysis of fast algorithms.

The power of these approaches raises a number of questions: What is the
exact relation between Multiplicative Weight Updates and Gradient
Descent? Why do Multiplicative Weight Updates show up in so many
settings? What is the intuition behind Nesterov's iterative algorithm
that achieves the asymptotically optimal iteration count for smooth
convex optimization?

In this survey talk, I will provide answers by presenting a unified
framework that reveals the power and limitations of each method and
provides much needed geometric intuition. Among other insights, I will
explain how Multiplicative Weight Updates are a particular instance of a
dual iterative method, known as Mirror Descent, and how Nesterov's
algorithm can be naturally derived as a combination of Mirror and
Gradient Descent steps.

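To make the first claim concrete, here is a minimal Python sketch (the toy weights, gradient vector, and step size are my own illustration, not from the talk) showing that mirror descent over the probability simplex with the negative-entropy mirror map reduces exactly to the multiplicative weight update:

```python
import math

def mwu_step(w, grad, eta):
    """One Multiplicative Weight Updates step: exponential reweighting,
    then renormalization to stay on the probability simplex."""
    w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    s = sum(w)
    return [wi / s for wi in w]

def mirror_descent_step(w, grad, eta):
    """One mirror descent step with the negative-entropy mirror map
    Phi(w) = sum_i w_i log w_i.  Map to the dual space via grad Phi,
    take a gradient step there, map back, and Bregman-project onto
    the simplex (which here is just normalization)."""
    dual = [math.log(wi) - eta * gi for wi, gi in zip(w, grad)]  # dual step
    w = [math.exp(d) for d in dual]                              # map back
    s = sum(w)
    return [wi / s for wi in w]                                  # projection

# The two updates coincide on any input:
w0 = [0.25, 0.25, 0.25, 0.25]
g = [0.3, -0.1, 0.5, 0.0]   # a toy loss/gradient vector
a = mwu_step(w0, g, eta=0.5)
b = mirror_descent_step(w0, g, eta=0.5)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```

Since exp(log w_i - eta g_i) = w_i exp(-eta g_i), the two updates are the same map; the choice of mirror map (entropy vs. squared Euclidean norm) is what distinguishes MWU from plain projected gradient descent.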
As an example of the application of these insights, I will also discuss
their crucial role in recent progress in the nearly-linear-time solution
of packing linear programs, both in parallel and sequentially.

The material in this talk is joint work with Zeyuan Allen-Zhu (MIT).

Contact: Thomas Vidick email@example.com