Sekitoshi Kanai

Evaluating Time-Series Training Dataset through Lens of Spectrum in Deep State Space Models

Aug 29, 2024

Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks

Mar 15, 2024

Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff

Aug 31, 2023

Regularizing Neural Networks with Meta-Learning Generative Models

Jul 26, 2023

Fast Regularized Discrete Optimal Transport with Group-Sparse Regularizers

Mar 14, 2023

Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks

Oct 04, 2022

Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness

Jul 21, 2022

Transfer Learning with Pre-trained Conditional Generative Models

Apr 27, 2022

F-Drop&Match: GANs with a Dead Zone in the High-Frequency Domain

Jun 04, 2021

Smoothness Analysis of Loss Functions of Adversarial Training

Mar 02, 2021