Yeonjong Shin

A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling

Mar 16, 2024

tLaSDI: Thermodynamics-informed latent space dynamics identification

Mar 09, 2024

Randomized Forward Mode of Automatic Differentiation for Optimization Algorithms

Oct 24, 2023

On the training and generalization of deep operator networks

Sep 02, 2023

GFINNs: GENERIC Formalism Informed Neural Networks for Deterministic and Stochastic Dynamical Systems

Aug 31, 2021

Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions

May 20, 2021

A Caputo fractional derivative-based algorithm for optimization

Apr 06, 2021

Plateau Phenomenon in Gradient Descent Training of ReLU networks: Explanation, Quantification and Avoidance

Jul 14, 2020

On the Convergence and Generalization of Physics Informed Neural Networks

Apr 03, 2020

Effects of Depth, Width, and Initialization: A Convergence Analysis of Layer-wise Training for Deep Linear Neural Networks

Oct 14, 2019