
Yasutoshi Ida

Evaluating Time-Series Training Dataset through Lens of Spectrum in Deep State Space Models

Aug 29, 2024

Fast Regularized Discrete Optimal Transport with Group-Sparse Regularizers

Mar 14, 2023

Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks

Oct 04, 2022

Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness

Jul 21, 2022

Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks

May 31, 2022

Pruning Randomly Initialized Neural Networks with Iterative Randomization

Jun 17, 2021

Smoothness Analysis of Loss Functions of Adversarial Training

Mar 02, 2021

Constraining Logits by Bounded Function for Adversarial Robustness

Oct 06, 2020

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks

Sep 19, 2019

Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining

Jun 10, 2019