
Pingzhi Li

Glider: Global and Local Instruction-Driven Expert Router

Oct 09, 2024

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches

Oct 08, 2024

Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild

Oct 07, 2024

Enhancing Quantum Security over Federated Learning via Post-Quantum Cryptography

Sep 06, 2024

Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark

Jun 12, 2024

Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent

Apr 30, 2024

Privacy-preserving Fine-tuning of Large Language Models through Flatness

Mar 07, 2024

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Feb 26, 2024

Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy

Oct 02, 2023