Xipeng Zhang

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024
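
The headline number refers to the mixture-of-experts design: of the model's total parameters, only the experts routed to each token are "activated." Below is a minimal sketch of generic top-k expert routing; the layer sizes, router, and gating here are toy assumptions, not Hunyuan-Large's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- hypothetical, far smaller than Hunyuan-Large's real config.
d_model, n_experts, top_k = 8, 4, 1

W_router = rng.standard_normal((d_model, n_experts))
W_experts = rng.standard_normal((n_experts, d_model, d_model))  # one matrix per expert

def moe_forward(x):
    """Send each token through only its top-k experts: the unselected
    experts' weights stay idle, so 'activated' parameters per token are
    a small fraction of total parameters."""
    logits = x @ W_router                                  # (tokens, n_experts)
    gates = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    chosen = np.argsort(logits, axis=-1)[:, -top_k:]       # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in chosen[t]:
            out[t] += gates[t, e] * (x[t] @ W_experts[e])  # only k experts run
    return out

tokens = rng.standard_normal((3, d_model))
print(moe_forward(tokens).shape)  # (3, 8)
```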

E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity

Oct 24, 2023
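
N:M sparsity keeps at most N nonzero weights in every group of M consecutive weights, a pattern that hardware sparse kernels can exploit. The sketch below applies a standard 2:4 mask by weight magnitude; E-Sparse's actual contribution is ranking weights with an entropy-based information metric instead, which is not reproduced here.

```python
import numpy as np

def nm_prune(W, n=2, m=4):
    """Zero out all but the n largest-magnitude weights in every group of
    m consecutive weights along the last axis (the standard N:M pattern).
    Magnitude is only a stand-in for E-Sparse's entropy-based metric."""
    rows, cols = W.shape
    assert cols % m == 0, "columns must be divisible by the group size m"
    groups = W.reshape(rows, cols // m, m)
    order = np.argsort(np.abs(groups), axis=-1)   # rank weights within each group
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, order[..., -n:], True, axis=-1)  # keep top-n per group
    return (groups * mask).reshape(rows, cols)

W = np.random.default_rng(0).standard_normal((2, 8))
print(nm_prune(W, 2, 4))  # exactly 2 nonzeros in every group of 4
```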

MKQ-BERT: Quantized BERT with 4-bits Weights and Activations

Mar 25, 2022
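
Quantizing both weights and activations to 4 bits means each value is stored as one of 16 integer levels plus a shared scale. Below is a generic symmetric int4 quantize/dequantize round trip; MKQ-BERT's actual scheme (scale granularity, calibration, and training procedure) is described in the paper and nothing here is taken from it.

```python
import numpy as np

def quantize_int4_symmetric(W):
    """Per-tensor symmetric 4-bit quantization: map floats to integers in
    [-8, 7] with a single scale factor."""
    scale = np.abs(W).max() / 7.0            # 7 = largest positive int4 value
    q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int4 codes."""
    return q.astype(np.float32) * scale

W = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int4_symmetric(W)
print(np.abs(W - dequantize(q, s)).max())    # worst-case quantization error
```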