Joseph Hassoun

SaiT: Sparse Vision Transformers through Adaptive Token Pruning

Oct 11, 2022

MaiT: Leverage Attention Masks for More Efficient Image Transformers

Jul 06, 2022

Learned Token Pruning for Transformers

Jul 02, 2021

Near-Lossless Post-Training Quantization of Deep Neural Networks via a Piecewise Linear Approximation

Jan 31, 2020