
Dónal Landers

digital Experimental Cancer Medicine Team, Cancer Biomarker Centre, CRUK Manchester Institute, University of Manchester

Active entailment encoding for explanation tree construction using parsimonious generation of hard negatives

Aug 02, 2022

Biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data

Jul 02, 2022

Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making

Apr 13, 2022

Transformers and the representation of biomedical background knowledge

Feb 04, 2022
Figure 1 for Transformers and the representation of biomedical background knowledge
Figure 2 for Transformers and the representation of biomedical background knowledge
Figure 3 for Transformers and the representation of biomedical background knowledge
Figure 4 for Transformers and the representation of biomedical background knowledge
Viaarxiv icon