Mert Gurbuzbalaban, Rutgers University
Langevin algorithms, a core class of Markov Chain Monte Carlo methods, play a central role in machine learning, particularly for Bayesian inference in high-dimensional models and for the stochastic non-convex optimization problems prevalent in deep learning. This talk examines the practical aspects of stochastic Langevin algorithms through three examples. First, it explores their role in non-convex optimization, focusing on their efficacy in navigating complex loss landscapes. The discussion then turns to decentralized Langevin algorithms, which are relevant in distributed optimization settings where data is dispersed across multiple sources. Finally, the focus shifts to constrained sampling, where the goal is to sample from a target distribution subject to constraints. In each setting, we introduce new algorithms with convergence guarantees and demonstrate their performance and scalability to large datasets through numerical examples.
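For readers unfamiliar with the method, the following is a minimal sketch of the unadjusted Langevin algorithm (ULA), the basic building block underlying the stochastic Langevin methods discussed in the talk. The function names and parameters are illustrative, not taken from the talk; here ULA samples from a standard Gaussian target with potential U(x) = x^2/2.

```python
import numpy as np

def langevin_sample(grad_U, x0, step=0.01, n_steps=50000, seed=0):
    """Unadjusted Langevin algorithm (ULA) sketch.

    Iterates x <- x - step * grad_U(x) + sqrt(2 * step) * N(0, 1),
    which approximately samples from the density proportional to exp(-U(x)).
    """
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        samples[t] = x
    return samples

# Target: standard normal, U(x) = x^2 / 2, so grad_U(x) = x.
samples = langevin_sample(lambda x: x, x0=0.0)
burn_in = samples[10000:]  # discard early iterates before the chain mixes
```

After burn-in, the empirical mean and variance of the chain should be close to 0 and 1, the moments of the standard normal target; in stochastic variants (e.g., SGLD), the exact gradient grad_U is replaced by a mini-batch estimate.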