FlashDecoding++: Faster Large Language Model Inference on GPUs

Nov 10, 2023
