AG Mathematics of Deep Learning – Andreas Bärmann / Kevin Aigner (FAU Erlangen-Nürnberg)

Date: 06-07-2021 | Time: 12:15 PM – 1:45 PM | Location: Online

Title: Online Learning for Optimization Problems with Unknown or Uncertain Cost Functions

Abstract: The first part of the talk begins by recapitulating several basic algorithms and results in online learning, in particular the multiplicative weights method and online gradient descent. Based on these algorithms, we demonstrate how to learn the objective function of a decision-maker while only observing the problem input data and the decision-maker's corresponding decisions over multiple rounds. Our approach works for linear objectives over arbitrary feasible sets for which a linear optimization oracle is available. The two exact algorithms we present, based on multiplicative weights updates and online gradient descent, respectively, converge at a rate of O(1/sqrt(T)) and thus allow us to take decisions that are essentially as good as those of the observed decision-maker after relatively few observations. We show the effectiveness and possible applications of our methods in a broad computational study. This is joint work with Alexander Martin, Sebastian Pokutta and Oskar Schneider.
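To make the first part more concrete, the following is a minimal Python sketch of the online-gradient-descent variant for learning an unknown linear objective from observed decisions. It is an illustration under assumptions, not the implementation presented in the talk: the oracle solve_lp, the normalization of the cost estimate to the probability simplex and the step sizes 1/sqrt(t) are choices made only for this example.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def learn_objective_ogd(observations, solve_lp, dim):
    """Online gradient descent for estimating an unknown linear cost vector
    from the decisions of an observed decision-maker.

    observations: iterable of (feasible_set_data, expert_decision) pairs
    solve_lp:     linear optimization oracle,
                  solve_lp(c, data) -> argmin over the feasible set of c @ x
    """
    c_hat = np.ones(dim) / dim  # initial estimate, normalized to the simplex
    for t, (data, x_expert) in enumerate(observations, start=1):
        x_ours = solve_lp(c_hat, data)   # our decision under the current estimate
        g = x_expert - x_ours            # (sub)gradient of the round-t loss c @ (x_expert - x_ours)
        c_hat = project_to_simplex(c_hat - g / np.sqrt(t))
    return c_hat
```

With step sizes proportional to 1/sqrt(t), the averaged per-round loss of such a scheme decays at the O(1/sqrt(T)) rate mentioned above, so the learned estimate produces decisions whose cost approaches that of the observed decision-maker.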
In the second part of the talk, we consider the robust treatment of stochastic optimization problems involving random vectors with unknown discrete probability distributions. With this problem class, we demonstrate the basic concepts of data-driven optimization under uncertainty. Furthermore, we introduce a new iterative approach that uses scenario observations to learn more about the uncertainty over time. This means our solutions become less and less conservative, interpolating between distributionally robust and stochastic optimization. We achieve this by solving the distributionally robust optimization problem over time via an online-learning approach while iteratively updating the ambiguity sets. We provide a regret bound for the quality of the obtained solutions that converges at a rate of O(log(T)/T) and illustrate the effectiveness of our procedure by numerical experiments. Our proposed algorithm is able to solve the online learning problem significantly faster than equivalent reformulations. This is joint work with Kristin Braun, Frauke Liers, Sebastian Pokutta, Oskar Schneider, Kartikey Sharma and Sebastian Tschuppik.
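The following Python sketch illustrates the data-driven ingredient of the second part: an ambiguity set around the empirical scenario distribution whose radius shrinks as more observations arrive, and the evaluation of the worst-case expected cost over that set for a fixed decision. The L1-ball ambiguity set, the concrete radius formula and the function names are assumptions made for illustration and do not reproduce the algorithm from the talk.

```python
import numpy as np

def update_ambiguity_set(counts, t, delta=0.05):
    """Empirical distribution over the observed scenarios together with an L1
    confidence radius that shrinks with the number of observations t
    (a concentration-type choice made for this sketch)."""
    p_hat = counts / counts.sum()
    n = len(counts)
    radius = np.sqrt(2.0 * (n * np.log(2.0) + np.log(1.0 / delta)) / t)
    return p_hat, radius

def worst_case_expectation(costs, p_hat, radius):
    """Worst-case expected cost over the ambiguity set {p : ||p - p_hat||_1 <= radius}.
    The adversary shifts probability mass from the cheapest scenarios to the most
    expensive one; moving mass m changes the L1 distance by 2m."""
    order = np.argsort(costs)      # scenario indices, cheapest first
    p = p_hat.astype(float).copy()
    budget = radius / 2.0          # total probability mass that may be moved
    worst = order[-1]              # most expensive scenario receives the mass
    for i in order[:-1]:
        move = min(p[i], budget)
        p[i] -= move
        p[worst] += move
        budget -= move
        if budget <= 1e-12:
            break
    return float(p @ costs)
```

In such a scheme, a decision is chosen with respect to the worst-case expectation over the current ambiguity set, and the set is updated after each new scenario observation; as the radius shrinks, the solutions interpolate between distributionally robust and stochastic optimization as described above.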

Event Details

Date: 06-07-2021
Time: 12:15 PM – 1:45 PM
Location: Online
Event Categories: Invprob-Vortraege