LATTE: Low-Precision Approximate Attention with Head-wise Trainable Threshold for Efficient Transformer

Apr 11, 2024


View paper on arXiv
