James Liley

Ethical considerations of use of hold-out sets in clinical prediction model management
Jun 05, 2024

Safe machine learning model release from Trusted Research Environments: The AI-SDC package
Dec 06, 2022

GRAIMATTER Green Paper: Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)
Nov 03, 2022

Optimal sizing of a holdout set for safe predictive model updating
Feb 17, 2022

Model updating after interventions paradoxically introduces bias
Oct 22, 2020