Jena D. Hwang

Diverging Preferences: When do Annotators Disagree and do Models Know?

Oct 18, 2024

Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance

Jul 10, 2024

Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty

Jan 12, 2024

SPLAIN: Augmenting Cybersecurity Warnings with Reasons and Data

Nov 19, 2023

UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations

Nov 14, 2023

The Generative AI Paradox: "What It Can Create, It May Not Understand"

Oct 31, 2023

"You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation

Oct 26, 2023

Cultural and Linguistic Diversity Improves Visual Representations

Oct 22, 2023

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

Jun 09, 2023

Faith and Fate: Limits of Transformers on Compositionality

Jun 01, 2023