Prof. Dr. Mario Ohlberger

Model order reduction for parameterized partial differential equations is a very active research area that has seen tremendous development in recent years from both theoretical and application perspectives. A particularly promising approach is the reduced basis method, which relies on approximating the solution manifold of a parameterized system by tailored low-dimensional approximation spaces spanned by suitably selected particular solutions, called snapshots. With speedups that can reach several orders of magnitude, reduced basis methods enable high-fidelity real-time simulations for certain problem classes and dramatically reduce the computational cost in many-query applications. While the "online efficiency" of these model reduction methods is very convincing for problems with a rapid decay of the Kolmogorov n-width, there are still major drawbacks and limitations. Most importantly, the construction of the reduced system in a so-called "offline phase" is extremely CPU-time and memory intensive for large-scale systems. For practical applications, it is thus necessary to derive model reduction techniques that do not rely on a classical offline/online splitting but allow for more flexibility in the usage of computational resources. In this talk we focus on learning-based reduction methods in the context of PDE-constrained optimization and inverse problems and evaluate their overall efficiency. We discuss learning strategies such as adaptive enrichment, as well as the combination of reduced order models with machine learning approaches in the context of time-dependent problems. Concepts of rigorous certification and convergence will be presented, as well as numerical experiments that demonstrate the efficiency of the proposed approaches.
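To illustrate the offline/online splitting mentioned above, the following minimal sketch builds a reduced basis from snapshots via proper orthogonal decomposition (POD) and then solves the Galerkin-projected system online. It is not the speaker's implementation; the toy parameterized problem (a 1D reaction-diffusion system with diffusion parameter mu), the training set, and the truncation tolerance are illustrative assumptions only.

import numpy as np

n = 200                                   # full-order dimension (assumed toy size)
e = np.ones(n)
L = (np.diag(2 * e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1)) * (n + 1) ** 2
I = np.eye(n)
b = np.ones(n)                            # right-hand side

def solve_fom(mu):
    """Full-order model: solve (mu * L + I) u = b for one parameter value."""
    return np.linalg.solve(mu * L + I, b)

# "Offline" phase: collect snapshots over a training set and compress via POD (SVD).
training_params = np.linspace(0.1, 10.0, 20)
snapshots = np.column_stack([solve_fom(mu) for mu in training_params])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
V = U[:, :r]                              # reduced basis (n x r)

# Precompute parameter-independent reduced operators once (affine in mu).
L_r, I_r, b_r = V.T @ L @ V, V.T @ V, V.T @ b

def solve_rom(mu):
    """Online phase: solve the small r x r Galerkin-projected system."""
    u_r = np.linalg.solve(mu * L_r + I_r, b_r)
    return V @ u_r

# Quick check on a parameter not in the training set.
mu_test = 3.7
u_full, u_red = solve_fom(mu_test), solve_rom(mu_test)
err = np.linalg.norm(u_full - u_red) / np.linalg.norm(u_full)
print(f"reduced dimension r = {r}, relative error = {err:.2e}")

The expensive snapshot computation and SVD happen once offline; each online query only solves an r x r system, which is the source of the speedups described above.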

23.05.2024, Room: MPI-Seminarraum "Prigogine", Time: 14:00

Zoom-Link: https://eu02web.zoom-x.de/j/7302508065

Last modified: 21.05.2024 - Contact: Volker Kaibel