
Xucheng Ye

Towards Comprehensive Preference Data Collection for Reward Modeling (Jun 24, 2024)

KwaiYiiMath: Technical Report (Oct 19, 2023)

FedSkel: Efficient Federated Learning on Heterogeneous Systems with Skeleton Gradients Update (Aug 20, 2021)

S2Engine: A Novel Systolic Architecture for Sparse Convolutional Neural Networks (Jun 15, 2021)

RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models (Jun 07, 2021)

SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training (Jul 21, 2020)

Accelerating CNN Training by Sparsifying Activation Gradients (Aug 01, 2019)