Hai "Helen" Li

SpeechPrune: Context-aware Token Pruning for Speech Information Retrieval

Dec 16, 2024

FedProphet: Memory-Efficient Federated Adversarial Training via Theoretic-Robustness and Low-Inconsistency Cascade Learning

Sep 12, 2024

SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models

Oct 29, 2023

Rethinking Normalization Methods in Federated Learning

Oct 07, 2022