
Tao Huang

Learning Humanoid Standing-up Control across Diverse Postures

Feb 12, 2025

Real-Time Privacy Risk Measurement with Privacy Tokens for Gradient Leakage

Feb 07, 2025

Privacy Token: Surprised to Find Out What You Accidentally Revealed

Feb 06, 2025

VLA-Cache: Towards Efficient Vision-Language-Action Model via Adaptive Token Caching in Robotic Manipulation

Feb 04, 2025

Median of Forests for Robust Density Estimation

Jan 25, 2025

MiniMax-01: Scaling Foundation Models with Lightning Attention

Jan 14, 2025

MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders

Jan 03, 2025

Cross-Self KV Cache Pruning for Efficient Vision-Language Inference

Dec 05, 2024

Intermediate Outputs Are More Sensitive Than You Think

Dec 01, 2024

Learning Humanoid Locomotion with Perceptive Internal Model

Nov 21, 2024