
Zhangchen Xu

KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding

Mar 04, 2025

SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities

Feb 17, 2025

Small Models Struggle to Learn from Strong Reasoners

Feb 17, 2025

Stronger Models are NOT Stronger Teachers for Instruction Tuning

Nov 12, 2024

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models

Jun 18, 2024

ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates

Jun 17, 2024

Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing

Jun 12, 2024

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning

May 31, 2024

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding

Feb 24, 2024

ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

Feb 22, 2024