Arnulf Jentzen

Non-convergence to global minimizers in data driven supervised deep learning: Adam and stochastic gradient descent optimization provably fail to converge to global minimizers in the training of deep neural networks with ReLU activation

Oct 14, 2024

An Overview on Machine Learning Methods for Partial Differential Equations: from Physics Informed Neural Networks to Deep Operator Learning

Aug 23, 2024

Convergence rates for the Adam optimizer

Jul 29, 2024

Non-convergence of Adam and other adaptive stochastic gradient descent optimization methods for non-vanishing learning rates

Jul 11, 2024

Learning rate adaptive stochastic gradient descent optimization methods: numerical simulations for deep learning methods for partial differential equations and convergence analyses

Jun 20, 2024

Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks

Feb 07, 2024

Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory

Oct 31, 2023

Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the $L^p$-sense

Sep 24, 2023

On the existence of minimizers in shallow residual ReLU neural network optimization landscapes

Feb 28, 2023

Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations

Feb 07, 2023