Zhihua Wu

ChuXin: 1.6B Technical Report

May 08, 2024

Efficient LLM Inference with Kcache

Apr 28, 2024

Code Comparison Tuning for Code Large Language Models

Mar 28, 2024

RecycleGPT: An Autoregressive Language Model with Recyclable Module

Aug 08, 2023

TA-MoE: Topology-Aware Large Scale Mixture-of-Expert Training

Feb 20, 2023

HelixFold: An Efficient Implementation of AlphaFold2 using PaddlePaddle

Jul 13, 2022

SE-MoE: A Scalable and Efficient Mixture-of-Experts Distributed Training and Inference System

May 20, 2022

Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters

May 19, 2022

ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation

Dec 31, 2021

ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

Dec 23, 2021