
Shiyu Chang

Learning from Online Videos at Inference Time for Computer-Use Agents

Nov 06, 2025

Rethinking the Text-Vision Reasoning Imbalance in MLLMs through the Lens of Training Recipes

Oct 26, 2025

A Hierarchical Probabilistic Framework for Incremental Knowledge Tracing in Classroom Settings

Jun 11, 2025

Collision- and Reachability-Aware Multi-Robot Control with Grounded LLM Planners

May 26, 2025

Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning

Apr 10, 2025

ThinkPrune: Pruning Long Chain-of-Thought of LLMs via Reinforcement Learning

Apr 02, 2025

KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse

Feb 21, 2025

Instruction-Following Pruning for Large Language Models

Jan 07, 2025

Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning

Oct 25, 2024

Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective

Jul 24, 2024