Hongpeng Jin

CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration

Nov 05, 2024

Boosting Deep Ensembles with Learning Rate Tuning

Oct 10, 2024

DA-MoE: Towards Dynamic Expert Allocation for Mixture-of-Experts Models

Sep 10, 2024

Rethinking Learning Rate Tuning in the Era of Large Language Models

Sep 16, 2023