Stanley J. Osher

A Primal-Dual Framework for Transformers and Neural Networks

Jun 19, 2024

Wasserstein proximal operators describe score-based generative models and resolve memorization

Feb 09, 2024

PDE Generalization of In-Context Operator Networks: A Study on 1D Scalar Nonlinear Conservation Laws

Jan 21, 2024

Prompting In-Context Operator Learning with Sensor Data, Equations, and Natural Language

Aug 09, 2023

In-Context Operator Learning for Differential Equation Problems

Apr 17, 2023

Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization

Aug 01, 2022

Multi-Agent Shape Control with Optimal Transport

Jun 30, 2022

Transformer with Fourier Integral Attentions

Jun 01, 2022

Proximal Implicit ODE Solvers for Accelerating Learning Neural ODEs

Apr 19, 2022

Transformer with a Mixture of Gaussian Keys

Oct 16, 2021