Professor Adrian Lewis wins John von Neumann Prize
"This prize represents a pinnacle of my career," Lewis said. "It is a wonderful honor from my scientific community, one that has left me flabbergasted...”
The 2020 INFORMS John von Neumann Theory Prize is awarded to Adrian S. Lewis, the Samuel B. Eckert Professor of Engineering in the School of Operations Research and Information Engineering at Cornell, “for his fundamental and sustained contributions to continuous optimization, operations research, and, more broadly, computational science. His work has pushed the frontiers of nonlinear optimization and convex analysis and developed path-breaking theory that has led to much subsequent work.”
"This prize represents a pinnacle of my career," Lewis said. "It is a wonderful honor from my scientific community, one that has left me flabbergasted...”
"I do enjoy crafting an intellectual story. For me, this prize honors a narrative of exciting ideas, blending variational themes and computation, ideas that I was lucky enough to bounce around with many extraordinary co-authors,” said Lewis. “Among them, my early mentor Jon Borwein first taught me about the fundamental role of metric regularity in understanding sensitivity analysis. I wove Jon's lesson together with ideas of problem robustness, nonsmooth calculus and geometry, and algorithmic speed, in work with Asen Dontchev and Terry Rockafellar (the 1999 prize-winner) that grew into my 2014 talk for the International Congress of Mathematicians."
Lewis describes how three particularly enjoyable collaborations have dominated his thinking. First, over two decades, Jim Burke and Michael Overton have taught him the interplay between classical matrix analysis, contemporary optimization, systems control, and computational methods. “A single picture (‘spectrum’) serves to illustrate: a quasi-Newton spectrum steadily separating while successfully running a classical smooth algorithm (due in part to 2017 prize-winner Don Goldfarb) on a nonsmooth eigenvalue optimization problem to which, seemingly, it should not apply,” says Lewis. “We still cannot explain this picture.”
“Secondly, back in 2004, two gifted European postdoctoral visitors chanced by my office simultaneously,” Lewis said. “Jerome Bolte, Aris Daniilidis, and I confidently predicted eternal obscurity for the resulting study on the Kurdyka-Lojasiewicz inequality—a type of error bound, measuring success in concrete nonsmooth optimization problems. To our happy surprise, this work has grown into a cornerstone of convergence analysis in optimization for machine learning.”
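For readers unfamiliar with the inequality, one common statement of the Kurdyka-Lojasiewicz property is sketched below in LaTeX. This is the standard textbook form under typical assumptions, given here for illustration rather than quoted from Lewis's papers.

```latex
% One standard form of the Kurdyka-Lojasiewicz (KL) inequality.
% A function f satisfies the KL property at a critical point x* if there
% exist eta > 0, a neighborhood U of x*, and a concave "desingularizing"
% function phi on [0, eta) with phi(0) = 0 and phi' > 0 such that, for
% all x in U with f(x*) < f(x) < f(x*) + eta,
\varphi'\bigl(f(x) - f(x^\ast)\bigr)\,
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; 1 .
% The classical Lojasiewicz case takes phi(s) = c s^{1 - theta} with
% theta in [1/2, 1), which rearranges to the familiar error bound
% |f(x) - f(x*)|^theta <= C dist(0, \partial f(x)).
```

Error bounds of this kind control how fast first-order methods approach critical points, which is why the inequality became the cornerstone of convergence analysis that Lewis describes.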
“Lastly, here in my ORIE home, I like to explain, tongue only slightly in cheek, that Dima Drusvyatskiy began his Cornell life in 2008 as my student but ended as my advisor,” Lewis continued. “Dima initially followed my interests (partly with Steve Wright) in the foundations of active set methods in optimization. These techniques date back to the simplex method of George Dantzig, the inaugural prizewinner in 1975. Dima and I quickly moved on to a fruitful collaboration with Alex Ioffe that included a breakthrough in our understanding of how nonconvex alternating projection algorithms converge. Over the last century, researchers regularly rediscover this intuitive technique—see the second picture (‘transversal’). Those rediscoveries range from popular contemporary applications in the big data arena all the way back to 1933 when the discoverer was none other than John von Neumann himself.”
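The alternating projection method Lewis mentions is easy to state: to find a point in the intersection of two sets, project the current iterate onto one set, then the other, and repeat. Below is a minimal runnable sketch for two illustrative convex sets, a unit disk and a line; the sets, formulas, and starting point are hypothetical choices for demonstration, not taken from the article.

```python
import numpy as np

# Von Neumann-style alternating projections: find a point in the
# intersection of two sets by projecting onto each in turn.
# Illustrative sets: A = closed unit disk, B = the line {x : a.x = b}.

def proj_disk(x):
    """Project x onto the closed unit disk {x : ||x|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_line(x, a, b):
    """Project x onto the line {x : a.x = b} (a != 0)."""
    return x - (a @ x - b) / (a @ a) * a

a, b = np.array([1.0, 1.0]), 1.0   # the line x + y = 1 meets the disk
x = np.array([3.0, -4.0])          # arbitrary starting point

for _ in range(100):
    x = proj_disk(proj_line(x, a, b))  # one full alternating sweep

# x now lies (numerically) in both sets.
print(x, abs(a @ x - b), np.linalg.norm(x))
```

For closed convex sets with a common point, this iteration converges to a point of the intersection (von Neumann proved the subspace case); the nonconvex behavior is exactly what the transversality analysis of Drusvyatskiy, Ioffe, and Lewis addresses.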
"Professor Lewis has published seminal work on a wide range of topics including eigenvalue optimization, quasi-Newton algorithms, gradient sampling methods and control, activity identification via partial smoothness, alternating projection methods, conditioning and error bounds, semi-algebraic variational analysis and the Kurdyka-Lojasiewicz inequality, and hyperbolic polynomials,” said Pinar Kesknocak, INFORMS president, and Asuman Ozdaglar, INFORMS acting prize committee chair, in a written statement. “The clarity and elegance of his writing is well-known and admired. Through scholarly papers, research monographs, and mentorship, he has influenced several generations of optimization researchers, as well as practitioners.”
“His results on convex analysis over Hermitian matrices opened the door to the subdifferential analysis of such functions, as well as to a duality and sensitivity theory for optimization problems with such functions,” they continued. “Together with Burke and Overton, he produced a series of papers leading to a deep understanding of the variational behavior of spectral functions, including the spectral radius. His convergence guarantees for alternating/cyclic projection methods, in both convex and nonconvex settings, are used to find a point in the intersection of finitely many sets, a prototypical problem in computational mathematics.”
“A consistent theme in Professor Lewis's work is to bring variational analytic tools and computation closer together. For example, his recent paper with Drusvyatskiy and Ioffe proves that under a natural transversality condition, described in variational analytic terms, the method of alternating projections converges locally at a linear rate,” Keskinocak and Ozdaglar concluded. “His more recent work has focused on understanding the impact of variational analytic notions of stability on linear/quadratic rates of convergence of Gauss-Newton type methods for minimizing compositions of convex functions and smooth maps. These results have implications for a number of fundamental problems including phase retrieval, matrix factorization, and robust principal component analysis.”
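The composite problems mentioned in closing share a compact template, sketched below. This is the standard formulation from the composite-optimization literature, paraphrased for illustration rather than quoted from the prize citation.

```latex
% Composite optimization template: h convex, c a smooth map.
%   minimize_x  h(c(x))
% A Gauss-Newton / prox-linear step linearizes c but keeps h intact:
x_{k+1} \in \operatorname*{argmin}_{x}\;
  h\bigl(c(x_k) + \nabla c(x_k)(x - x_k)\bigr)
  + \tfrac{1}{2t}\,\lVert x - x_k \rVert^2 .
% Example (phase retrieval, one common modeling choice): take
% h(z) = ||z||_1 and c_i(x) = (a_i^T x)^2 - b_i, so that h(c(x))
% penalizes the misfit between measured magnitudes b_i and the
% model values (a_i^T x)^2.
```

Each such step solves a convex subproblem, and when the composition is well-conditioned near a solution (the stability notions referenced above), the iterates converge at the linear or quadratic rates the citation describes.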