Statistics Department Seminar Series: George Michailidis, Professor, Department of Statistics and Data Science, UCLA
"A Penalty-Based Method for Communication-Efficient Decentralized Bilevel Optimization"
Abstract: Bilevel optimization (BLO) is an optimization problem that involves two levels of hierarchy (i.e., upper and lower levels), wherein obtaining the solution to the upper-level problem requires solving the lower-level one. BLO has recently attracted significant attention due to novel applications in machine learning that involve optimizing nested objective functions, including hyperparameter tuning and related automated ML tasks, as well as few-shot learning, robust learning, representation learning, coreset selection, and reinforcement learning. The first part of the talk gives a brief overview of the BLO problem, selected machine learning applications that can be cast as BLO problems, and algorithms for obtaining its solution.
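For readers unfamiliar with the setup, the generic BLO problem can be written as follows (the notation is illustrative, not necessarily the speaker's):

$$
\min_{x}\; f\bigl(x,\, y^{*}(x)\bigr)
\quad \text{subject to} \quad
y^{*}(x) \in \arg\min_{y}\; g(x, y),
$$

where $f$ is the upper-level (outer) objective and $g$ is the lower-level (inner) objective. In hyperparameter tuning, for instance, $x$ collects the hyperparameters, $g$ is the training loss of the model fit under those hyperparameters, and $f$ is the validation loss evaluated at the fitted model $y^{*}(x)$.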
In the second part of the talk, we introduce the decentralized version of the BLO problem (without assuming a star-shaped network topology) and present a penalty function-based decentralized algorithm with theoretical guarantees under both convex and non-convex assumptions on the upper-level functions. Specifically, a distributed alternating gradient-type algorithm for solving consensus BLO over a decentralized network is developed. A key feature of the proposed algorithm is the estimation of the hyper-gradient of the penalty function through decentralized computation of matrix-vector products and a few vector communications. Further, the theoretical results highlight improvements in the iteration complexity of decentralized BLO. Empirical results on selected ML problems demonstrate that the proposed method performs well in real-world settings.
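As background, one common penalty-based reformulation of BLO (a sketch of the general idea; the speaker's specific penalty function and decentralized algorithm may differ) replaces the lower-level optimality constraint with a value-function penalty:

$$
\min_{x,\, y}\; f(x, y) + \gamma \bigl( g(x, y) - g^{*}(x) \bigr),
\qquad
g^{*}(x) := \min_{z}\; g(x, z),
$$

where $\gamma > 0$ is a penalty parameter. In a decentralized consensus version, each node holds local copies of the variables together with its own components of $f$ and $g$, estimates the gradient of the penalty objective locally, and exchanges only a small number of vectors with its neighbors in place of communicating with a central server.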
https://georgemichailidis.github.io/
Building: West Hall
Event Type: Workshop / Seminar
Tags: seminar
Source: Happening @ Michigan from Department of Statistics, Department of Statistics Seminar Series