Ikko Yamane

Scalable and hyper-parameter-free non-parametric covariate shift adaptation with conditional sampling

Dec 15, 2023

Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification

Feb 01, 2022
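
In binary classification, the Bayes error is the error rate of the optimal classifier, E[min(eta(X), 1 - eta(X))] with eta(x) = P(Y = 1 | X = x). The short sketch below only illustrates that quantity with hypothetical posterior values; it is not a reproduction of the paper's estimator, which works from soft (uncertainty) labels rather than known posteriors.

    import numpy as np

    # Hypothetical class posteriors eta(x) = P(Y = 1 | X = x) for a few
    # inputs; in practice these are unknown and must be approximated,
    # e.g. from soft labels.
    eta = np.array([0.05, 0.40, 0.90, 0.75, 0.50])

    # Bayes error = E[min(eta(X), 1 - eta(X))], the lowest achievable
    # misclassification rate, approximated here by a sample average.
    bayes_error = np.mean(np.minimum(eta, 1.0 - eta))
    print(bayes_error)  # 0.26 for the toy values above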

Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences

Jul 16, 2021
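
The setting of the paper above is to learn a predictor from X to Y when no direct (X, Y) pairs exist, only (X, U) pairs and (U, Y) pairs that share a mediating variable U. As a point of reference, here is a minimal two-step plug-in baseline; the toy data, linear models, and the assumption that X influences Y only through U are all assumptions of this sketch, not the paper's proposed estimator.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Dataset 1: (X, U) pairs. Dataset 2: (U, Y) pairs. No (X, Y) pairs.
    X1 = rng.normal(size=(300, 2))
    U1 = X1 @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=300)
    U2 = rng.normal(size=300)
    Y2 = 2.0 * U2 + rng.normal(scale=0.1, size=300)

    # Step 1: fit g(u) ~ E[Y | U = u] on the (U, Y) data.
    g = LinearRegression().fit(U2.reshape(-1, 1), Y2)

    # Step 2: regress the pseudo-targets g(u_i) on x_i using the (X, U)
    # data, giving h(x) ~ E[g(U) | X = x], which matches E[Y | X = x]
    # when X affects Y only through U.
    h = LinearRegression().fit(X1, g.predict(U1.reshape(-1, 1)))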

A One-step Approach to Covariate Shift Adaptation

Jul 08, 2020
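
Covariate shift adaptation is classically done in two steps: estimate the density ratio w(x) = p_test(x) / p_train(x), then minimize the importance-weighted training loss. The paper above folds these into a single step; the sketch below shows only the classical weighted baseline with a hypothetical, pre-computed weight vector, not the one-step method itself.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy training data (illustrative only).
    X_train = rng.normal(size=(200, 2))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

    # Hypothetical importance weights w(x) = p_test(x) / p_train(x).
    # In the two-step recipe these come from a separate density-ratio
    # estimation step; here they are a placeholder.
    w = np.exp(0.5 * X_train[:, 0])

    # Importance-weighted empirical risk minimization.
    clf = LogisticRegression().fit(X_train, y_train, sample_weight=w)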

Do We Need Zero Training Loss After Achieving Zero Training Error?

Feb 20, 2020
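
The paper above proposes "flooding": rather than driving the training loss to zero, keep it floating around a small constant b (the flood level) by transforming the loss as |loss - b| + b. A minimal PyTorch-style sketch; the flood level value and the surrounding training code are assumptions of this illustration.

    import torch

    def flooded(loss: torch.Tensor, b: float = 0.1) -> torch.Tensor:
        # Below the flood level b, gradients push the loss back up
        # toward b, so the loss stays near b instead of reaching zero;
        # b is a hyper-parameter (0.1 is only an example value).
        return (loss - b).abs() + b

    # Usage inside a training loop (model, criterion, x, y assumed):
    #   loss = flooded(criterion(model(x), y))
    #   loss.backward()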

Uplift Modeling from Separate Labels

Oct 01, 2018
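
Uplift modeling targets the treatment effect E[Y | X, T = 1] - E[Y | X, T = 0]. The paper above studies a "separate labels" setting in which treatment and outcome labels are not jointly observed for the same individuals; the sketch below shows only the standard two-model baseline for fully observed randomized data, as a reference point rather than the paper's method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy randomized-trial data (illustrative): covariates X, binary
    # treatment T, binary outcome Y, all observed jointly here.
    X = rng.normal(size=(500, 3))
    T = rng.integers(0, 2, size=500)
    Y = (X[:, 0] + 0.5 * T + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Two-model baseline: one outcome model per arm; the uplift is the
    # difference of predicted positive-class probabilities.
    m1 = LogisticRegression().fit(X[T == 1], Y[T == 1])
    m0 = LogisticRegression().fit(X[T == 0], Y[T == 0])
    uplift = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]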

Regularized Multi-Task Learning for Multi-Dimensional Log-Density Gradient Estimation

Aug 01, 2015