Aya Abdelsalam Ismail

Concept Bottleneck Language Models for Protein Design
Nov 09, 2024

Interpretable Mixture of Experts for Structured Data
Jun 05, 2022

Improving Deep Learning Interpretability by Saliency Guided Training
Nov 29, 2021

Improving Multimodal Accuracy Through Modality Pre-training and Attention
Nov 11, 2020

Benchmarking Deep Learning Interpretability in Time Series Predictions
Oct 26, 2020

Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
Oct 27, 2019

Improving Long-Horizon Forecasts with Expectation-Biased LSTM Networks
Apr 18, 2018