
Yasuhiro Fujiwara

Meta-learning for Positive-unlabeled Classification

Jun 06, 2024

GuP: Fast Subgraph Matching by Guard-based Pruning

Jun 11, 2023

Fast Regularized Discrete Optimal Transport with Group-Sparse Regularizers

Mar 14, 2023

Few-shot Learning for Unsupervised Feature Selection

Jul 02, 2021

Meta-Learning for Relative Density-Ratio Estimation

Jul 02, 2021

Semi-supervised Anomaly Detection on Attributed Graphs

Feb 27, 2020

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks

Sep 19, 2019

Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining

Jun 10, 2019

Sigsoftmax: Reanalysis of the Softmax Bottleneck

May 28, 2018

Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks

Sep 28, 2017