WKVQuant: Quantizing Weight and Key/Value Cache for Large Language Models Gains More

Feb 20, 2024

View paper on arXiv.