Takashi Furuya

Can neural operators always be continuously discretized?

Dec 04, 2024

Simultaneously Solving FBSDEs with Neural Operators of Logarithmic Depth, Constant Width, and Sub-Linear Rank

Oct 18, 2024

Quantitative Approximation for Neural Operators in Nonlinear Parabolic Equations

Oct 03, 2024

Transformers are Universal In-context Learners

Aug 02, 2024

Mixture of Experts Soften the Curse of Dimensionality in Operator Learning

Apr 13, 2024

Breaking the Curse of Dimensionality with Distributed Neural Computation

Feb 05, 2024

Convergences for Minimax Optimization Problems over Infinite-Dimensional Spaces Towards Stability in Adversarial Training

Dec 02, 2023

Globally injective and bijective neural operators

Jun 06, 2023

Fine-tuning Neural-Operator architectures for training and generalization

Jan 27, 2023

Variational Inference with Gaussian Mixture by Entropy Approximation

Feb 26, 2022