
Jeff Pool

MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models

Sep 26, 2024
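MaskLLM treats 2:4 sparsity as a learning problem: each group of four weights admits six valid 2:4 masks, and the paper learns a distribution over those candidates with differentiable Gumbel-softmax sampling. A minimal NumPy sketch of that idea follows; it is my simplification, not the paper's implementation, and all names in it are illustrative.

    import numpy as np
    from itertools import combinations

    # The C(4,2) = 6 valid 2:4 masks for one group of four weights.
    CANDIDATES = np.array(
        [[1.0 if i in keep else 0.0 for i in range(4)]
         for keep in combinations(range(4), 2)],
        dtype=np.float32,
    )  # shape (6, 4)

    def soft_24_mask(logits, tau=0.5, rng=np.random.default_rng(0)):
        # Gumbel-softmax over the six candidates: differentiable in `logits`,
        # approaching a hard 2:4 mask as tau -> 0.
        g = rng.gumbel(size=logits.shape)
        y = np.exp((logits + g) / tau)
        probs = y / y.sum(axis=1, keepdims=True)
        return probs @ CANDIDATES  # (num_groups, 4) soft masks

    weights = np.random.default_rng(1).normal(size=(2, 4)).astype(np.float32)
    logits = np.zeros((2, 6), dtype=np.float32)  # learned end to end in the paper
    print(weights * soft_24_mask(logits))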

Accelerating Sparse Deep Neural Networks

Apr 16, 2021
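This is the NVIDIA 2:4 structured-sparsity paper: keep the two largest-magnitude weights in every group of four, a pattern Ampere tensor cores can accelerate. A minimal magnitude-pruning sketch of the pattern, not the paper's production tooling:

    import numpy as np

    def prune_2_of_4(weights):
        # Zero the two smallest-magnitude entries in every contiguous group
        # of four, yielding the hardware-friendly 2:4 pattern (50% sparsity).
        w = weights.reshape(-1, 4)
        drop = np.argsort(np.abs(w), axis=1)[:, :2]
        mask = np.ones_like(w)
        np.put_along_axis(mask, drop, 0.0, axis=1)
        return (w * mask).reshape(weights.shape)

    w = np.random.default_rng(0).normal(size=(4, 8)).astype(np.float32)
    print(prune_2_of_4(w))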

Self-Supervised GAN Compression

Jul 12, 2020

Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training

Jun 01, 2018

Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip

Apr 26, 2018

Efficient Sparse-Winograd Convolutional Neural Networks

Feb 18, 2018

Exploring the Regularity of Sparse Structure in Convolutional Neural Networks

Jun 05, 2017

Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks

May 03, 2017

DSD: Dense-Sparse-Dense Training for Deep Neural Networks

Feb 21, 2017
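DSD trains in three phases: dense, then sparse (prune low-magnitude weights and retrain under the fixed mask), then dense again with the mask removed. A schematic of the schedule, where train_step stands in for one optimizer update; the helper is assumed for illustration, not taken from the paper.

    import numpy as np

    def dsd_train(w, train_step, steps=(100, 100, 100), sparsity=0.5):
        for _ in range(steps[0]):            # phase 1: dense training
            w = train_step(w)
        thresh = np.quantile(np.abs(w), sparsity)
        mask = (np.abs(w) > thresh).astype(w.dtype)
        for _ in range(steps[1]):            # phase 2: sparse, mask held fixed
            w = train_step(w * mask) * mask
        for _ in range(steps[2]):            # phase 3: re-dense, mask removed
            w = train_step(w)
        return w

    # Toy "training" that pulls weights toward ones, purely illustrative.
    step = lambda w: w + 0.1 * (np.ones_like(w) - w)
    print(dsd_train(np.random.default_rng(0).normal(size=8).astype(np.float32), step))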

Learning both Weights and Connections for Efficient Neural Networks

Oct 30, 2015
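This 2015 paper established the prune-and-retrain recipe: drop low-magnitude connections, retrain the survivors, and repeat to reach higher sparsity. A schematic sketch, with train_step again an assumed stand-in for an optimizer update:

    import numpy as np

    def prune_and_retrain(w, train_step, rounds=3, frac=0.3, steps=50):
        mask = np.ones_like(w)
        for _ in range(rounds):
            surviving = np.abs(w[mask > 0])
            thresh = np.quantile(surviving, frac)   # cut `frac` of survivors
            mask *= (np.abs(w) > thresh).astype(w.dtype)
            for _ in range(steps):                  # retrain remaining connections
                w = train_step(w * mask) * mask
        return w, mask

    # Toy "training" that pulls weights toward ones, purely illustrative.
    step = lambda w: w + 0.1 * (np.ones_like(w) - w)
    w, m = prune_and_retrain(
        np.random.default_rng(1).normal(size=16).astype(np.float32), step)
    print(m.mean())  # fraction of surviving connections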