
Yanfeng Jiang

DeltaDQ: Ultra-High Delta Compression for Fine-Tuned LLMs via Group-wise Dropout and Separate Quantization

Oct 11, 2024

ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization for Vision Transformers

Jul 03, 2024

Exploring Post-Training Quantization of Protein Language Models

Oct 30, 2023