
Leonardo Ranaldi

Eliciting Critical Reasoning in Retrieval-Augmented Language Models via Contrastive Explanations

Oct 30, 2024

Animate, or Inanimate, That is the Question for Large Language Models

Aug 12, 2024

Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models

May 01, 2024

Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL Translation

Feb 12, 2024

When Large Language Models contradict humans? Large Language Models' Sycophantic Behaviour

Nov 15, 2023

Empowering Multi-step Reasoning across Languages via Tree-of-Thoughts

Nov 14, 2023

HANS, are you clever? Clever Hans Effect Analysis of Neural Systems

Sep 21, 2023

Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models by Translation-following demonstrations

Aug 27, 2023

A Trip Towards Fairness: Bias and De-Biasing in Large Language Models

May 23, 2023

PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models

May 09, 2023