Feiwen Zhu

ScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours
Apr 17, 2024

Boosting the Convergence of Reinforcement Learning-based Auto-pruning Using Historical Data
Jul 16, 2021

FusionStitching: Boosting Memory Intensive Computations for Deep Learning Workloads
Sep 23, 2020

Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip
Apr 26, 2018