What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding

Jun 04, 2024

View paper on arXiv