Gated Linear Attention Transformers with Hardware-Efficient Training

Dec 24, 2023
[Figures 1–4 from the paper]
