
Lifeng Jin

Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing

Apr 18, 2024

Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models

Apr 14, 2024

Self-Consistency Boosts Calibration for Math Reasoning

Mar 14, 2024

A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation

Mar 06, 2024

Collaborative decoding of critical tokens for boosting factuality of large language models

Feb 28, 2024

Fine-Grained Self-Endorsement Improves Factuality and Reasoning

Feb 23, 2024

Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation

Feb 14, 2024

Inconsistent dialogue responses and how to recover from them

Jan 18, 2024

TencentLLMEval: A Hierarchical Evaluation of Real-World Capabilities for Human-Aligned LLMs

Nov 09, 2023

The Trickle-down Impact of Reward Consistency on RLHF

Sep 28, 2023