EE Systems Seminar
Nonconvex Optimization for High Dimensional Learning: From Phase Retrieval to Submodular Maximization
Abstract: Many problems of contemporary interest in signal processing and machine learning involve highly nonconvex optimization problems. While nonconvex optimization problems are known to be intractable in general, simple local search heuristics such as (stochastic) gradient descent are often surprisingly effective at finding global optima on real or randomly generated data. In this talk I will discuss some results explaining the success of these heuristics, focusing on two problems.
The first problem concerns the recovery of a structured signal from under-sampled quadratic measurements. I will show that projected gradient descent on a natural nonconvex formulation finds globally optimal solutions with a near-minimal number of samples, breaking through local sample complexity barriers that have emerged in the recent literature. I will also discuss how these new mathematical developments pave the way for a new generation of data-driven phaseless imaging systems that can utilize prior information to significantly reduce acquisition time and enhance image reconstruction, enabling nano-scale imaging at unprecedented speeds and resolutions.

The second problem focuses on submodular optimization problems that emerge in a variety of areas in machine learning, including utility functions in active learning and sensing, matrix approximations, and network inference. I will show how to cast this highly discrete optimization problem as a continuous nonconvex optimization problem. I will then discuss new local search heuristics based on this continuous formulation that lead to fast algorithms for submodular maximization. Despite the apparent lack of convexity in such functionals, I will show that these new local search heuristics lead to approximation guarantees on par with, and sometimes better than, the state of the art. This talk is based on joint work with collaborators who will be properly introduced during the talk.
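To give a flavor of the first problem, the following is a minimal sketch of projected gradient descent for recovering a signal from quadratic (phaseless) measurements. Everything here is an illustrative assumption rather than the speaker's exact method: sparsity is used as the structural prior, the measurement vectors are Gaussian, and the spectral-style initialization and step size follow common practice in the Wirtinger-flow literature.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 80, 5            # signal dimension, number of measurements, sparsity

# hypothetical k-sparse ground truth and Gaussian measurement vectors
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2           # quadratic (phaseless) measurements

def hard_threshold(z, k):
    """Project onto the set of k-sparse vectors (keep the k largest entries)."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

def loss(x):
    """Quartic least-squares objective on the quadratic measurements."""
    return np.mean(((A @ x) ** 2 - y) ** 2) / 4

# spectral-style initialization: leading eigenvector of (1/m) * sum_i y_i a_i a_i^T
Y = (A.T * y) @ A / m
_, V = np.linalg.eigh(Y)        # eigenvalues ascending; last column is the top eigenvector
x = hard_threshold(np.sqrt(y.mean()) * V[:, -1], k)
loss_init = loss(x)

# projected gradient descent: gradient step on the quartic loss, then sparse projection
step = 0.1 / y.mean()
for _ in range(500):
    r = ((A @ x) ** 2 - y) * (A @ x)
    x = hard_threshold(x - step * (A.T @ r) / m, k)

loss_final = loss(x)
# the sign of x is unidentifiable from |<a_i, x>|^2, so measure error up to a sign flip
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

Note that `m = 80 < n = 100`: with the sparsity projection, far fewer measurements than the ambient dimension can suffice, which is the sample-complexity phenomenon the abstract alludes to.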
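For the second problem, here is a rough sketch of one way a discrete submodular maximization can be relaxed to a continuous problem: stochastic gradient estimates of the multilinear extension combined with Frank-Wolfe-style ("continuous greedy") updates, run on a toy weighted-coverage function. The coverage instance, sample counts, and rounding rule are all illustrative assumptions, not the speaker's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sets, n_items, k = 12, 40, 3          # ground set size, universe size, cardinality budget

# toy monotone submodular function: weighted coverage of a random set system
cover = rng.random((n_sets, n_items)) < 0.15
weights = rng.random(n_items)

def f(S):
    """Weighted coverage of the union of the chosen sets."""
    if not S:
        return 0.0
    covered = cover[list(S)].any(axis=0)
    return float(weights[covered].sum())

def grad_estimate(x, samples=50):
    """Stochastic gradient of the multilinear extension F(x) = E_{S~x}[f(S)],
    using dF/dx_i = E[f(S + i) - f(S - i)] over random sets S drawn coordinate-wise."""
    g = np.zeros(n_sets)
    for _ in range(samples):
        S = set(np.nonzero(rng.random(n_sets) < x)[0])
        for i in range(n_sets):
            g[i] += f(S | {i}) - f(S - {i})
    return g / samples

# continuous greedy: Frank-Wolfe steps toward the best vertex of {x in [0,1]^n : sum(x) <= k}
T = 20
x = np.zeros(n_sets)
for _ in range(T):
    g = grad_estimate(x)
    v = np.zeros(n_sets)
    v[np.argsort(g)[-k:]] = 1.0         # linear maximizer over the cardinality polytope
    x += v / T

# round the fractional solution back to a discrete set of size k
S_hat = set(np.argsort(x)[-k:])
```

The point of the relaxation is that the continuous iterate `x` lives in a polytope where gradient-based local search applies, even though the extension is not concave; the final rounding returns a feasible discrete solution.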
Bio: Mahdi Soltanolkotabi is currently an assistant professor in the Ming Hsieh Department of Electrical Engineering at the University of Southern California. Prior to joining USC, he completed his PhD in electrical engineering at Stanford in 2014. He was a postdoctoral researcher in the EECS department at UC Berkeley during the 2014-2015 academic year. His research focuses on the design and mathematical understanding of computationally efficient algorithms for optimization, high dimensional statistics, machine learning, signal processing and computational imaging. Recently, a main focus of his research has been on developing and analyzing algorithms for nonconvex optimization, with provable guarantees of convergence to the global optimum.
Host: Fariborz Salehi
Contact: Katie Pichotta email@example.com