
H.B. Keller Colloquium

Monday, May 2, 2022
4:00pm to 5:00pm
Annenberg 105
Optimization for Over-Parameterized Systems and Transition to Linearity in Deep Learning
Mikhail Belkin, Professor, Halicioğlu Data Science Institute, University of California San Diego

The success of deep learning is due, to a great extent, to the remarkable effectiveness of gradient-based optimization methods applied to large neural networks. In this talk I will discuss some general mathematical principles that allow for efficient optimization in over-parameterized non-linear systems, a setting that includes deep neural networks. I will show that the optimization problems corresponding to these systems are not convex, even locally, but instead satisfy the Polyak-Lojasiewicz (PL) condition on most of the parameter space, which allows for efficient optimization by gradient descent or SGD. I will connect the PL condition of these systems to the condition number associated with the tangent kernel and show how a non-linear theory for such systems parallels classical analyses of over-parameterized linear equations.
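
As a hedged sketch of the kind of statement involved (my notation, not necessarily the speaker's): for the squared loss $L(w) = \tfrac{1}{2}\|F(w)-y\|^{2}$ of a system $F$ with Jacobian $DF(w)$ and tangent kernel $K(w) = DF(w)\,DF(w)^{\top}$,

    \[
    \|\nabla L(w)\|^{2} \;=\; (F(w)-y)^{\top} K(w)\,(F(w)-y) \;\ge\; 2\,\lambda_{\min}(K(w))\,L(w),
    \]

which is precisely the PL condition with constant $\mu = \lambda_{\min}(K(w))$. Under $\mu$-PL and $\beta$-smoothness, gradient descent with step size $\eta \le 1/\beta$ satisfies $L(w_t) \le (1-\eta\mu)^{t} L(w_0)$, taking the optimal loss to be zero as in the interpolating, over-parameterized regime.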

In a related but conceptually separate development, I will discuss a new perspective on the remarkable, recently discovered phenomenon of transition to linearity (constancy of the NTK) in certain classes of large neural networks. I will show how this transition to linearity arises from the scaling of the Hessian with the size of the network.
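
A hedged sketch of the mechanism (again in my notation): if each network output $f(w)$ has spectral Hessian norm $\|H_f(w)\| = O(1/\sqrt{m})$ throughout a ball of radius $R$ around the initialization $w_0$, where $m$ is the width, then Taylor's theorem gives

    \[
    \big|f(w) - f(w_0) - \nabla f(w_0)^{\top}(w - w_0)\big| \;\le\; \tfrac{1}{2}\,\sup_{\|v-w_0\|\le R}\|H_f(v)\|\;\|w-w_0\|^{2} \;=\; O\!\left(R^{2}/\sqrt{m}\right),
    \]

so over any fixed-radius ball the model is nearly linear in its parameters, and the tangent kernel is nearly constant, once the width is large.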

Combining these ideas yields a clean and general argument for demonstrating the PL condition and convergence for a large class of wide neural networks.
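
To make the combined argument concrete, below is a minimal NumPy sketch (my own illustration under a standard NTK-style scaling, not code from the talk or the associated papers). It runs gradient descent on a wide one-hidden-layer ReLU network with squared loss and tracks the smallest eigenvalue of the tangent kernel, the quantity controlling the PL constant in the bound sketched above.

    # Minimal sketch (not the speaker's code): gradient descent on a wide
    # one-hidden-layer ReLU network with squared loss.  We track the smallest
    # eigenvalue of the tangent kernel K = J J^T, which lower-bounds the PL
    # constant via ||grad L(w)||^2 >= 2 * lambda_min(K(w)) * L(w).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 10, 5, 2000                    # n samples, input dim d, width m (m*d >> n)
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    W = rng.standard_normal((m, d))          # trainable hidden weights
    a = rng.choice([-1.0, 1.0], size=m)      # fixed output weights (NTK-style 1/sqrt(m) scaling below)

    def forward(W):
        # f(x_i) = (1/sqrt(m)) * sum_k a_k * relu(w_k . x_i)
        return np.maximum(W @ X.T, 0.0).T @ a / np.sqrt(m)        # shape (n,)

    def jacobian(W):
        # d f(x_i) / d W[k, j] = (1/sqrt(m)) * a_k * 1[w_k . x_i > 0] * x_{ij}
        act = (W @ X.T > 0.0).astype(float)                        # (m, n)
        J = (act * a[:, None])[:, :, None] * X[None, :, :]         # (m, n, d)
        return J.transpose(1, 0, 2).reshape(n, m * d) / np.sqrt(m)

    lr = 0.1
    for t in range(201):
        r = forward(W) - y                    # residuals
        loss = 0.5 * r @ r
        J = jacobian(W)                       # (n, m*d)
        K = J @ J.T                           # tangent kernel, (n, n)
        lam_min = np.linalg.eigvalsh(K)[0]    # smallest eigenvalue
        if t % 50 == 0:
            print(f"t={t:3d}  loss={loss:.3e}  lambda_min(K)={lam_min:.3f}")
        grad = (r @ J).reshape(m, d)          # gradient of the loss w.r.t. W
        W = W - lr * grad

With the width much larger than the number of samples, the smallest eigenvalue of the tangent kernel should stay bounded away from zero along the trajectory, and the printed loss should decrease at a near-geometric rate, consistent with the PL-based analysis.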

Finally, I will comment on systems that are "almost" over-parameterized, which appear to be common in practice.

Joint work with Chaoyue Liu and Libin Zhu.

For more information, please contact Diana Bohler by phone at 626-395-1768 or by email at [email protected].