SWAT: Scalable and Efficient Window Attention-based Transformers Acceleration on FPGAs

May 27, 2024
