Preksha Nema

STOAT: Structured Data to Analytical Text With Controls

May 19, 2023

T-STAR: Truthful Style Transfer using AMR Graph as Intermediate Representation

Dec 03, 2022

A Framework for Rationale Extraction for Deep QA models

Oct 09, 2021

The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT

Jan 22, 2021

Towards Interpreting BERT for Reading Comprehension Based QA

Oct 18, 2020

On the Importance of Local Information in Transformer Based Models

Aug 13, 2020

Towards Transparent and Explainable Attention Models

Apr 29, 2020

Let's Ask Again: Refine Network for Automatic Question Generation

Aug 31, 2019

Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples

Apr 04, 2019

ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions

Apr 04, 2019