Yuhang Zhou

Task-Aware Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning

Nov 02, 2024

Revisiting SLO and Goodput Metrics in LLM Serving

Oct 18, 2024

LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models

Sep 29, 2024

Reprogramming Distillation for Medical Foundation Models

Jul 09, 2024

Multimodal Graph Benchmark

Jun 24, 2024

Multi-Stage Balanced Distillation: Addressing Long-Tail Challenges in Sequence-Level Knowledge Distillation

Jun 19, 2024

Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters

Jun 14, 2024

Teaching-Assistant-in-the-Loop: Improving Knowledge Distillation from Imperfect Teacher Models in Low-Budget Scenarios

Jun 08, 2024

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement

May 29, 2024

Low-Rank Knowledge Decomposition for Medical Foundation Models

Apr 26, 2024