BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20251107T004647EST-76216n38vN@132.216.98.100
DTSTAMP:20251107T054647Z
DESCRIPTION:Title: Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling\n\nAbstract:\n\nSampling from a high-dimensional distribution is a fundamental task in statistics\, engineering\, and the sciences. A canonical approach is the Langevin Algorithm\, i.e.\, the Markov chain for the discretized Langevin Diffusion. This is the sampling analog of Gradient Descent. Despite being studied for several decades in multiple communities\, tight mixing bounds for this algorithm remain unresolved even in the seemingly simple setting of log-concave distributions over a bounded domain. This paper completely characterizes the mixing time of the Langevin Algorithm to its stationary distribution in this setting (and others). This mixing result can be combined with any bound on the discretization bias in order to sample from the stationary distribution of the continuous Langevin Diffusion. In this way\, we disentangle the study of the mixing and bias of the Langevin Algorithm.\nOur key insight is to introduce a technique from the differential privacy literature to the sampling literature. This technique\, called Privacy Amplification by Iteration\, uses as a potential a variant of Renyi divergence that is made geometrically aware via Optimal Transport smoothing. This gives a short\, simple proof of optimal mixing bounds and has several additional appealing properties. First\, our approach removes all unnecessary assumptions required by other sampling analyses. Second\, our approach unifies many settings: it extends unchanged if the Langevin Algorithm uses projections\, stochastic mini-batch gradients\, or strongly convex potentials (whereby our mixing time improves exponentially). Third\, our approach exploits convexity only through the contractivity of a gradient step\, reminiscent of how convexity is used in textbook proofs of Gradient Descent. In this way\, we offer a new approach towards further unifying the analyses of optimization and sampling algorithms.\n\nReferences:\n\n• https://arxiv.org/abs/2210.08448
DTSTART:20231108T180000Z
DTEND:20231108T190000Z
LOCATION:Room 1214\, Burnside Hall\, 805 rue Sherbrooke Ouest\, Montreal\, QC\, H3A 0B9\, CA
SUMMARY:Alexandra Kravchuk (McGill University)
URL:/mathstat/channels/event/alexandra-kravchuk-mcgill-university-352487
END:VEVENT
END:VCALENDAR