On adversarial training and perimeter minimization problems (N. Garcia Trillos, Department of Statistics, University of Wisconsin-Madison)

Date: 16-02-2022
Time: 4:00 PM – 5:00 PM
Location: Online

On adversarial training and perimeter minimization problems

Speaker: Prof. Dr. Nicolas Garcia Trillos
Affiliation: Department of Statistics, University of Wisconsin-Madison

https://fau.zoom.us/j/67222235882?pwd=MEVnbFhoREdKejJqTzlHQllTaHgzUT09

Meeting ID: 672 2223 5882
Passcode: 167751

Abstract: Adversarial training is a framework widely used by machine
learning practitioners to enforce robustness of learning models. However,
despite the development of several computational strategies for adversarial
training and some theoretical progress in the broader distributionally robust
optimization literature, several theoretical questions about adversarial
training remain relatively unexplored. One such question is to understand, in
more precise mathematical terms, the type of regularization enforced by
adversarial training in modern settings such as non-parametric classification
and classification with deep neural networks. In this talk, I will present a
series of connections between adversarial training and several problems in the
calculus of variations and geometric measure theory. These connections reveal a
rich geometric structure of adversarial problems and all conceptually aim at
answering the question: what is the regularization effect induced by
adversarial training? In concrete terms, I will discuss, among other things, an
equivalence between a family of adversarial training problems for
non-parametric classification and a family of regularized risk minimization
problems in which the regularizer is a nonlocal perimeter functional. In the
binary case, the resulting regularized risk minimization problems admit exact
convex relaxations of the type L^1 + TV, a form frequently studied in image
analysis and graph-based learning. I will highlight how these connections
provide novel theoretical results on the robust training of learning models as
well as a directly interpretable statistical motivation for a family of
regularized risk minimization problems involving perimeter/total variation.
This talk is based on joint works with Ryan Murray, Camilo A. García Trillos,
Leon Bungert, and Jakwang Kim.
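
To make the stated equivalence concrete, the following LaTeX display is a
schematic sketch, not part of the original abstract: the perturbation radius
\varepsilon, the data distribution \mu, and the nonlocal perimeter
\mathrm{Per}_\varepsilon are notational assumptions here, with precise
definitions in the cited joint works. The idea is that the adversarial risk of
a classification set A decomposes into the standard risk plus \varepsilon times
a nonlocal perimeter, which is the regularization effect described above.

% Schematic identity for binary classification with the 0-1 loss; exact
% hypotheses and the definition of the nonlocal, distribution-dependent
% perimeter Per_eps are given in the works cited in the abstract.
\[
  \mathbb{E}_{(x,y)\sim\mu}\Big[ \sup_{\|x'-x\|\le \varepsilon} \big|\mathbf{1}_A(x') - y\big| \Big]
  \;=\;
  \mathbb{E}_{(x,y)\sim\mu}\big[ \big|\mathbf{1}_A(x) - y\big| \big]
  \;+\; \varepsilon\,\mathrm{Per}_\varepsilon(A).
\]
% A generic convex problem of the L^1 + TV type mentioned in the abstract,
% written only as an illustration and not as the exact relaxation from the
% cited works: the indicator 1_A is relaxed to a [0,1]-valued function u,
% and f denotes a label function.
\[
  \min_{0 \le u \le 1} \; \int \big| u - f \big| \,\mathrm{d}\mu
  \;+\; \varepsilon\, \mathrm{TV}(u).
\]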

https://en.www.math.fau.de/applied-analysis/

Event Categories: Chair in Applied Analysis