Exploiting Student Parallelism for Low-latency GPU Inference of BERT-like Models in Online Services

Aug 22, 2024

View paper on arXiv