Jonathan Herzig

Distinguishing Ignorance from Error in LLM Hallucinations

Oct 29, 2024

Can Few-shot Work in Long-Context? Recycling the Context to Generate Demonstrations

Jun 19, 2024

TACT: Advancing Complex Aggregative Reasoning with Information Extraction Tools

Jun 05, 2024

Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?

May 09, 2024

Constructing Benchmarks and Interventions for Combating Hallucinations in LLMs

Apr 15, 2024

MiMiC: Minimally Modified Counterfactuals in the Representation Space

Feb 16, 2024

A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains

Feb 02, 2024

Multilingual Instruction Tuning With Just a Pinch of Multilinguality

Jan 08, 2024

A Comprehensive Evaluation of Tool-Assisted Generation Strategies

Oct 16, 2023

Evaluating and Modeling Attribution for Cross-Lingual Question Answering

May 23, 2023