ORIE Colloquium: Salar Fattahi (Michigan)

Location

Frank H. T. Rhodes Hall 253

Description

Finding the needle in the haystack: How gradient descent converges to the right solution in the face of many local minima

In contemporary machine learning, realistic models exhibit increasing nonconvexity and overwhelming overparameterization. This nonconvex nature often leads to numerous undesirable or “spurious” local solutions, while overparameterization exacerbates the risk of overfitting. Yet, simple “short-sighted” algorithms, such as gradient descent (GD) and its variants, often find the needle in the haystack: they converge to desirable solutions while avoiding spurious ones and overfitting. This talk explains the desirable performance of GD-based algorithms by studying their fine-grained trajectories on a class of factorized models, spanning low-rank models to deep neural networks. On the negative side, we show that even the simplest factorized models admit spurious solutions. On the positive side, however, we show that GD-based algorithms are oblivious to such “problematic” solutions.
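
The phenomenon the abstract describes can be made concrete with a toy experiment. The sketch below, in Python with NumPy, runs plain gradient descent on a symmetric low-rank factorization objective f(U) = (1/4)‖UUᵀ − M*‖²_F, a standard stand-in for the “factorized models” mentioned above rather than the speaker's actual setup. All dimensions, step sizes, and names here are illustrative assumptions: despite nonconvexity and deliberate overparameterization (more columns in U than the true rank), GD from a small random initialization typically recovers the ground-truth low-rank matrix.

    # Illustrative sketch only (not the speaker's code): gradient descent
    # on an overparameterized low-rank matrix factorization.
    import numpy as np

    rng = np.random.default_rng(0)

    n, r_true, r_fit = 20, 1, 5            # overparameterized: r_fit > r_true
    U_star = rng.standard_normal((n, r_true))
    M_star = U_star @ U_star.T             # ground-truth rank-1 PSD matrix

    def loss(U):
        # f(U) = (1/4) * ||U U^T - M*||_F^2
        return 0.25 * np.linalg.norm(U @ U.T - M_star, "fro") ** 2

    def grad(U):
        # gradient of f: (U U^T - M*) U
        return (U @ U.T - M_star) @ U

    U = 1e-3 * rng.standard_normal((n, r_fit))   # small random initialization
    step = 0.01
    for _ in range(5000):
        U = U - step * grad(U)

    print(f"final loss: {loss(U):.3e}")          # driven near zero by GD
    print("rank of U U^T:", np.linalg.matrix_rank(U @ U.T, tol=1e-6))
    # Typically prints rank 1: GD avoids spurious solutions and recovers M*.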
 

Bio:
Salar Fattahi is an assistant professor in the Department of Industrial and Operations Engineering at the University of Michigan. He received his Ph.D. from UC Berkeley in 2020. He has received a National Science Foundation CAREER Award and the North Campus Deans’ MLK Spirit Award. His research has been recognized by multiple awards, including the 2023 INFORMS Junior Faculty Interest Group Best Paper Award (second place), the INFORMS Data Mining Best Student Paper Award (finalist in 2023; winner in 2018), and the 2022 INFORMS Computing Society Best Student Paper Award (runner-up). He has served as an area chair for several premier conferences, including NeurIPS, ICLR, and AISTATS. His research is currently supported by two grants from the National Science Foundation and one from the Office of Naval Research.