
Hyo-Eun Kim

Photometric Transformer Networks and Label Adjustment for Breast Density Prediction

May 08, 2019

SRM: A Style-based Recalibration Module for Convolutional Neural Networks

Mar 26, 2019

Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks

Oct 17, 2018

Keep and Learn: Continual Learning by Constraining the Latent Space for Knowledge Preservation in Neural Networks

May 28, 2018

Semantic Noise Modeling for Better Representation Learning

Nov 04, 2016

Deconvolutional Feature Stacking for Weakly-Supervised Semantic Segmentation

Mar 12, 2016

Self-Transfer Learning for Fully Weakly Supervised Object Localization

Feb 04, 2016