Xinpeng Wang

Algorithmic Fidelity of Large Language Models in Generating Synthetic German Public Opinions: A Case Study

Dec 17, 2024

Understanding When Tree of Thoughts Succeeds: Larger Models Excel in Generation, Not Discrimination

Oct 24, 2024

FedCCRL: Federated Domain Generalization with Cross-Client Representation Learning

Oct 15, 2024

DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination

Oct 06, 2024

Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation

Oct 04, 2024

"Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations?

Jun 25, 2024

The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models

Jun 16, 2024

FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models

May 28, 2024

Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think

Apr 12, 2024

On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models

Mar 07, 2024