Leyang Cui

Lost in Literalism: How Supervised Training Shapes Translationese in LLMs

Mar 06, 2025

ThinkBench: Dynamic Out-of-Distribution Evaluation for Robust LLM Reasoning

Feb 22, 2025

Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability

Oct 15, 2024

Gated Slot Attention for Efficient Linear-Time Sequence Modeling

Sep 11, 2024

Not All Preference Pairs Are Created Equal: A Recipe for Annotation-Efficient Iterative Preference Learning

Jun 25, 2024

On the Transformations across Reward Model, Parameter Update, and In-Context Prompt

Jun 24, 2024

Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text

May 21, 2024

Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal

Mar 02, 2024

Retrieval is Accurate Generation

Feb 29, 2024

GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers

Feb 29, 2024