Joe Stacey

LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues
Mar 01, 2024

Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms
May 22, 2023

Improving Robustness in Knowledge Distillation Using Domain-Targeted Data Augmentation
May 22, 2023

Logical Reasoning with Span Predictions: Span-level Logical Atoms for Interpretable and Robust NLI Models
May 23, 2022

Natural Language Inference with a Human Touch: Using Human Explanations to Guide Model Attention
Apr 16, 2021

There is Strength in Numbers: Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training
Apr 27, 2020