
Kyle Mahowald

Models Can and Should Embrace the Communicative Nature of Human-Generated Math

Sep 25, 2024

To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning

Sep 18, 2024

Do they mean 'us'? Interpreting Referring Expressions in Intergroup Bias

Jun 25, 2024

Participle-Prepended Nominals Have Lower Entropy Than Nominals Appended After the Participle

May 16, 2024

Language Models Learn Rare Phenomena from Less Rare Phenomena: The Case of the Missing AANNs

Mar 28, 2024

Language models align with human judgments on key grammatical constructions

Jan 19, 2024

Mission: Impossible Language Models

Jan 12, 2024

Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently

Jan 12, 2024

Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs

Jan 10, 2024

Revisiting the Optimality of Word Lengths

Dec 06, 2023