
Pengcheng Dai

S2Engine: A Novel Systolic Architecture for Sparse Convolutional Neural Networks

Jun 15, 2021
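The S2Engine paper's dataflow is not described here, but the general motivation for sparsity-aware systolic hardware is easy to illustrate: when either operand of a multiply-accumulate is zero, a sparsity-aware processing element can skip the work entirely. The sketch below is a generic illustration of that idea (function name and counting scheme are my own, not the paper's architecture): a naive 2-D convolution that also counts how many MACs a zero-skipping PE could avoid.

```python
import numpy as np

def conv2d_mac_stats(x, w):
    """Naive valid-padding 2-D convolution that also counts how many
    multiply-accumulates could be skipped because an operand is zero.
    Illustrative only -- not the S2Engine dataflow."""
    H, W = x.shape
    K, _ = w.shape
    out = np.zeros((H - K + 1, W - K + 1))
    total = skipped = 0
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for a in range(K):
                for b in range(K):
                    total += 1
                    xa, wb = x[i + a, j + b], w[a, b]
                    if xa == 0.0 or wb == 0.0:
                        skipped += 1  # a sparsity-aware PE would skip this MAC
                        continue
                    out[i, j] += xa * wb
    return out, total, skipped

# ReLU-style activation maps are often roughly half zeros
rng = np.random.default_rng(0)
x = np.maximum(rng.standard_normal((8, 8)), 0.0)
w = rng.standard_normal((3, 3))
y, total, skipped = conv2d_mac_stats(x, w)
```

With ReLU-like inputs the skipped fraction tracks the activation sparsity, which is the headroom a sparse systolic array exploits.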

Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms

Apr 12, 2021
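The paper's specific optimizations are not reproduced here, but the baseline memory argument for GNNs on edge devices is standard: storing the graph in a compressed sparse row (CSR) layout costs O(V + E) instead of the O(V²) dense adjacency matrix. A minimal sketch of GCN-style mean-neighbor aggregation over CSR (function name and graph are mine, purely illustrative):

```python
import numpy as np

def csr_mean_aggregate(indptr, indices, feats):
    """Mean-neighbor aggregation over a CSR adjacency.
    Only edges are stored, so memory is O(V + E) rather than the
    O(V^2) dense matrix -- the saving that matters on edge devices."""
    out = np.zeros_like(feats)
    for v in range(len(indptr) - 1):
        nbrs = indices[indptr[v]:indptr[v + 1]]  # neighbor ids of vertex v
        if len(nbrs):
            out[v] = feats[nbrs].mean(axis=0)
    return out

# Tiny triangle graph 0-1, 0-2, 1-2, stored as directed edge pairs
indptr  = np.array([0, 2, 4, 6])
indices = np.array([1, 2, 0, 2, 0, 1])
feats   = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
agg = csr_mean_aggregate(indptr, indices, feats)
```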

SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training

Jul 21, 2020

Accelerating CNN Training by Sparsifying Activation Gradients

Aug 01, 2019
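The common core of gradient-sparsification schemes like the one this title names is to zero out small-magnitude activation-gradient entries so the backward pass touches less data. The sketch below is a generic top-k-by-magnitude thresholding, not the paper's actual rule; `keep_ratio` is a hypothetical knob introduced for illustration.

```python
import numpy as np

def sparsify_by_threshold(grad, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of an activation gradient,
    zeroing the rest. keep_ratio is a hypothetical knob; ties at the
    threshold may keep slightly more than k entries."""
    k = max(1, int(grad.size * keep_ratio))
    # k-th largest absolute value becomes the keep threshold
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    return np.where(np.abs(grad) >= thresh, grad, 0.0)

g = np.array([0.05, -0.9, 0.2, 0.01, 0.5])
s = sparsify_by_threshold(g, keep_ratio=0.4)  # keeps the 2 largest magnitudes
```

The resulting sparse gradients can then be stored or streamed in a compressed format, which is where the training-time acceleration comes from.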