Linear attention is (maybe) all you need (to understand transformer optimization)

Oct 02, 2023


View paper on arXiv
