Soroosh Baselizadeh

Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models
Apr 04, 2021

Neural Response Interpretation through the Lens of Critical Pathways
Mar 31, 2021

Rethinking Positive Aggregation and Propagation of Gradients in Gradient-based Saliency Methods
Dec 01, 2020

Multiresolution Knowledge Distillation for Anomaly Detection
Nov 22, 2020

Explaining Neural Networks via Perturbing Important Learned Features
Nov 25, 2019