
Yiqi Chen

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024

PTQ4ViT: Post-Training Quantization Framework for Vision Transformers

Nov 24, 2021

PTQ-SL: Exploring Sub-layerwise Post-training Quantization

Oct 18, 2021