
Naizheng Wang

DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism

Apr 01, 2025

MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts

Apr 22, 2024