Mixture-of-Depths: Dynamically allocating compute in transformer-based language models

Apr 02, 2024
Figures 1–4 from "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"


View paper on arXiv
