Shiyu Chang

Instruction-Following Pruning for Large Language Models

Jan 07, 2025

Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning

Oct 25, 2024

Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective

Jul 24, 2024

VSP: Assessing the dual challenges of perception and reasoning in spatial planning tasks for VLMs

Jul 02, 2024

Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference

Jun 12, 2024

A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation

Jun 11, 2024

Advancing the Robustness of Large Language Models through Self-Denoised Smoothing

Apr 18, 2024

A Survey on Data Selection for Language Models

Mar 08, 2024

Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing

Feb 28, 2024

Augment before You Try: Knowledge-Enhanced Table Question Answering via Table Expansion

Jan 28, 2024