
Tianzhu Ye

Universal YOCO for Efficient Depth Scaling (Apr 01, 2026)

Online Experiential Learning for Language Models (Mar 17, 2026)

On-Policy Context Distillation for Language Models (Feb 12, 2026)

Step by Step Network (Nov 18, 2025)

Black-Box On-Policy Distillation of Large Language Models (Nov 13, 2025)

SeerAttention-R: Sparse Attention Adaptation for Long Reasoning (Jun 10, 2025)

Reinforcement Pre-Training (Jun 09, 2025)

Rectified Sparse Attention (Jun 05, 2025)

Differential Transformer (Oct 07, 2024)

Agent Attention: On the Integration of Softmax and Linear Attention (Dec 22, 2023)