Shiyu Chang

KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse

Feb 21, 2025

Instruction-Following Pruning for Large Language Models

Jan 07, 2025

Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning

Oct 25, 2024

Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective

Jul 24, 2024

VSP: Assessing the dual challenges of perception and reasoning in spatial planning tasks for VLMs

Jul 02, 2024

Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference

Jun 12, 2024

A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation

Jun 11, 2024

Advancing the Robustness of Large Language Models through Self-Denoised Smoothing

Apr 18, 2024

A Survey on Data Selection for Language Models

Mar 08, 2024

Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing

Feb 28, 2024