VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering

Sep 27, 2021

View paper on arXiv