Francesco Ventura

Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features

Jun 12, 2021

What's in the box? Explaining the black-box model through an evaluation of its interpretable features

Jul 31, 2019

Automating concept-drift detection by self-evaluating predictive model degradation

Jul 18, 2019