Fovea Transformer: Efficient Long-Context Modeling with Structured Fine-to-Coarse Attention

Nov 13, 2023
