
Jianhao Zhang

Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models
Jun 03, 2024

Coop: Memory is not a Commodity
Nov 01, 2023

Skywork: A More Open Bilingual Foundation Model
Oct 30, 2023

daBNN: A Super Fast Inference Framework for Binary Neural Networks on ARM devices
Aug 16, 2019