Tuesday, September 24, 2024
Tutorial Sessions will be held in the UIUC Coordinated Science Laboratory building (1308 W Main St, Urbana, IL 61801) on the lower level, Room B02.
8:30-9:30 am | Registration and Breakfast
9:30-11:00 am | Sanjay Shakkottai
11:00-11:15 am | Break
11:15 am-12:30 pm | Sanjay Shakkottai
12:30-2:00 pm | Lunch Break (lunch not provided)
2:00-3:30 pm | David Gamarnik
3:30-4:00 pm | Break
4:00-5:30 pm | David Gamarnik
6:00-8:00 pm | Welcome Reception
The Welcome Reception will be in the Electrical and Computer Engineering building (306 N Wright St, Urbana, IL 61801) across from CSL, Room 3002.
Morning Speaker
Speaker: Sanjay Shakkottai
Title: Generative modeling of images using diffusions
Abstract: Diffusion models have emerged as a powerful new approach to generative modeling of images. The tutorial will consist of two parts. In the first part, we will discuss the basic mathematical models and techniques that underlie diffusions. Topics covered will include an overview of stochastic differential equations, the Fokker-Planck equation, forward and reverse processes, learning score functions through Tweedie’s formula, and ODE flow models. In the second part, we will discuss recent developments in diffusion models for posterior sampling. Specifically, we present the first framework that uses pre-trained latent diffusion models to solve linear inverse problems such as random inpainting, block inpainting, denoising, deblurring, destriping, and super-resolution. Next, we present an efficient second-order approximation using Tweedie’s formula to mitigate the bias incurred in the widely used first-order samplers. With this method, we devise a surrogate loss function to refine the reverse process at every diffusion step to address inverse problems and perform high-fidelity text-guided image editing.
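As background for the score-learning topic named in the abstract, Tweedie's formula connects Gaussian denoising to the score function that diffusion samplers require. A minimal statement, with notation chosen here for illustration (not taken from the tutorial itself):

```latex
% Tweedie's formula (Gaussian case): observe y = x + \sigma\,\epsilon with
% \epsilon \sim \mathcal{N}(0, I) and let p_\sigma denote the marginal density of y.
% Then the posterior mean of the clean signal is
\mathbb{E}[x \mid y] \;=\; y + \sigma^2 \,\nabla_y \log p_\sigma(y).
% Hence a network trained to denoise y implicitly estimates the score
% \nabla_y \log p_\sigma(y), the quantity driving the reverse diffusion process.
```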
Bio: Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is a Professor in the Chandra Family Department of Electrical and Computer Engineering, and holds the Cockrell Family Chair in Engineering #15. He is also the Director of the Center for Generative AI, a campus-wide computing cluster at UT Austin. He received the NSF CAREER award in 2004 and was elected as an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021. He has served as the Editor in Chief of IEEE/ACM Transactions on Networking. His research interests lie at the intersection of statistical learning and algorithms for resource allocation, with applications to generative models and wireless communication networks.
Afternoon Speaker
Speaker: David Gamarnik, Nanyang Technological University Professor of Operations Research, Massachusetts Institute of Technology, Cambridge, MA
Title: Overlap Gap Property: A topological barrier to optimizing over classical and quantum random structures
Abstract: The problem of optimizing over random structures emerges in many areas of mathematics, science, and engineering, including graph theory and network science, statistical physics, high-dimensional statistics, machine learning and artificial intelligence, theoretical computer science, and quantum information science. For many such structures, no fast algorithms for finding optimal solutions are known, and such algorithms are often believed not to exist. At the same time, formal hardness results for these problems, in the form of complexity-theoretic NP-hardness, are lacking. The graph alignment problem to be discussed by Massoulié in his talk is a good example of such a problem.
In our tutorial we detail a new approach to understanding the algorithmic intractability of these optimization problems. The approach is based on the topological disconnectivity of the space of near-optimal solutions, called the Overlap Gap Property (OGP), which traces its origins to the work of Parisi on replica symmetry breaking and its subsequent mathematically rigorous development by Talagrand. Using mostly elementary tools from discrete probability, we will give simple proofs of the presence of the OGP and discuss how the OGP a) emerges in most models known to exhibit apparent algorithmic hardness; b) is consistent with the hardness/tractability phase transition for most models analyzed to date; and, importantly, c) allows one to rule out large classes of classical and quantum algorithms as potential contenders, specifically classes of algorithms that exhibit input stability (noise insensitivity).
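As a rough illustration of the property named in the abstract (notation ours, not the speaker's), a random optimization problem exhibits the OGP when the normalized overlap of every pair of near-optimal solutions avoids an intermediate band of values:

```latex
% Overlap Gap Property (schematic): there exist 0 \le \nu_1 < \nu_2 \le 1 such that,
% with high probability, every pair \sigma_1, \sigma_2 of near-optimal solutions satisfies
\mathrm{overlap}(\sigma_1, \sigma_2) \;\in\; [0, \nu_1] \cup [\nu_2, 1].
% No pair of near-optimal solutions has overlap in the gap (\nu_1, \nu_2):
% the near-optimal set splits into well-separated clusters, which obstructs
% stable (noise-insensitive) algorithms from navigating between them.
```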
Bio: David Gamarnik is the Nanyang Technological University Professor of Operations Research in the Operations Research and Statistics Group at the Sloan School of Management, Massachusetts Institute of Technology. He received a B.A. in mathematics from New York University in 1993 and a Ph.D. in Operations Research from MIT in 1998. He was then a research staff member at the IBM T.J. Watson Research Center before joining MIT in 2005.
His research interests include discrete probability, optimization and algorithms, quantum computing, statistics and machine learning, stochastic processes, and queueing theory. He is a fellow of the Institute of Mathematical Statistics, the Institute for Operations Research and the Management Sciences, and the American Mathematical Society. He is a recipient of the Erlang Prize and the Best Publication Award from the Applied Probability Society of INFORMS, and was a finalist in the Franz Edelman Prize competition of INFORMS. He is a co-author of a textbook on queueing theory. He currently serves as an area editor of the Mathematics of Operations Research journal. In the past he served as an area editor of the Operations Research journal, and as an associate editor of Mathematics of Operations Research, the Annals of Applied Probability, Queueing Systems, and the Stochastic Systems journals.