BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20250709T193845EDT-1408UzvXGe@132.216.98.100
DTSTAMP:20250709T233845Z
DESCRIPTION:Title: Algorithms for stochastic nonconvex and nonsmooth optimization\n\nAbstract: Nonsmooth and nonconvex loss functions are often used to model physical phenomena\, provide robustness\, and improve stability. While convergence guarantees in the smooth\, convex setting are well documented\, algorithms for solving large-scale nonsmooth and nonconvex problems remain in their infancy.\n\nI will begin by isolating a class of nonsmooth and nonconvex functions that can be used to model a variety of statistical and signal processing tasks. Standard statistical assumptions on such inverse problems often endow the optimization formulation with an appealing regularity condition: the objective grows sharply away from the solution set. We show that under such regularity\, a variety of simple algorithms\, including subgradient and Gauss-Newton-like methods\, converge rapidly when initialized within constant relative error of the optimal solution. We illustrate the theory and algorithms on the real phase retrieval problem\, and survey a number of other applications\, including blind deconvolution and covariance matrix estimation.\n\nOne of the main advantages of smooth optimization over its nonsmooth counterpart is the potential to use a line search for improved numerical performance. A long-standing open question is how to design a line-search procedure in the stochastic setting. In the second part of the talk\, I will present a practical line-search method for smooth stochastic optimization that has rigorous convergence guarantees and requires only knowable quantities for implementation. While traditional line-search methods rely on exact computations of the gradient and function values\, our method assumes that these values are available only up to some dynamically adjusted accuracy that holds with sufficiently high\, but fixed\, probability. We show that the expected number of iterations to reach an approximate stationary point matches the worst-case efficiency of typical first-order methods\, while for convex and strongly convex objectives it achieves the rates of deterministic gradient descent.
DTSTART:20190204T210000Z
DTEND:20190204T220000Z
LOCATION:Room 1104\, Burnside Hall\, 805 rue Sherbrooke Ouest\, Montreal\, QC\, H3A 0B9\, CA
SUMMARY:Courtney Paquette - University of Waterloo
URL:/mathstat/channels/event/courtney-paquette-university-waterloo-293898
END:VEVENT
END:VCALENDAR