Wanxiang Che

Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring
Oct 28, 2024

What Factors Affect Multi-Modal In-Context Learning? An In-Depth Exploration
Oct 27, 2024

Unlocking the Boundaries of Thought: A Reasoning Granularity Framework to Quantify and Optimize Chain-of-Thought
Oct 08, 2024

Lens: Rethinking Multilingual Enhancement for Large Language Models
Oct 06, 2024

In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks
Oct 02, 2024

Enabling Real-Time Conversations with Minimal Training Costs
Sep 18, 2024

What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices
Sep 03, 2024

DAC: Decomposed Automation Correction for Text-to-SQL
Aug 16, 2024

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling
Aug 16, 2024

FLEXTAF: Enhancing Table Reasoning with Flexible Tabular Formats
Aug 16, 2024