ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer

Jan 26, 2025

View paper on arXiv
