TransfoRNN: Capturing the Sequential Information in Self-Attention Representations for Language Modeling

Apr 04, 2021


View paper on arXiv
