
Tianyu Pang

Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models

Dec 24, 2024

Real-time Identity Defenses against Malicious Personalization of Diffusion Models

Dec 13, 2024

When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context Training

Nov 20, 2024

Scaling up Masked Diffusion Models on Text

Oct 24, 2024

SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction

Oct 17, 2024

Model Balancing Helps Low-data Training and Fine-tuning

Oct 16, 2024

Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts

Oct 16, 2024

Improving Long-Text Alignment for Text-to-Image Diffusion Models

Oct 15, 2024

When Attention Sink Emerges in Language Models: An Empirical View

Oct 14, 2024

Denial-of-Service Poisoning Attacks against Large Language Models

Oct 14, 2024