
Qingfu Zhu

Can Large Language Models Understand You Better? An MBTI Personality Detection Dataset Aligned with Population Traits

Dec 17, 2024

SCITAT: A Question Answering Benchmark for Scientific Tables and Text Covering Diverse Reasoning Types

Dec 16, 2024

CRVQ: Channel-relaxed Vector Quantization for Extreme Compression of LLMs

Dec 12, 2024

A Static and Dynamic Attention Framework for Multi Turn Dialogue Generation

Oct 28, 2024

Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring

Oct 28, 2024

In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks

Oct 02, 2024

FLEXTAF: Enhancing Table Reasoning with Flexible Tabular Formats

Aug 16, 2024

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

Aug 16, 2024

DAC: Decomposed Automation Correction for Text-to-SQL

Aug 16, 2024

Concise and Precise Context Compression for Tool-Using Language Models

Jul 02, 2024