FlattenQuant: Breaking Through the Inference Compute-bound for Large Language Models with Per-tensor Quantization

Feb 28, 2024
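The page itself carries no method details, but the title names the core idea: per-tensor quantization assigns a single scale to an entire tensor (rather than one scale per channel or per token), which is what allows low-bit integer GEMMs to replace compute-bound FP16 matrix multiplies. As illustrative background only, and not the paper's FlattenQuant algorithm, here is a minimal NumPy sketch of symmetric per-tensor INT8 quantization:

```python
import numpy as np

def quantize_per_tensor(x: np.ndarray, bits: int = 8):
    """Symmetric per-tensor quantization: one scalar scale for the whole tensor."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for INT8
    scale = float(np.max(np.abs(x))) / qmax  # single scale shared by all elements
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map integer codes back to approximate float values."""
    return q.astype(np.float32) * scale

# Round-trip a random tensor: the worst-case error is half a quantization step.
x = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_per_tensor(x)
x_hat = dequantize(q, s)
max_err = float(np.max(np.abs(x - x_hat)))
```

Because one outlier value stretches the single scale for every element, per-tensor schemes typically need extra treatment of outliers (the "flattening" the title alludes to) to stay accurate at low bit widths.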
Figures 1–4 from the paper (images not reproduced in this extract).


View paper on arXiv