Qihuang Zhong

Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL

Oct 15, 2024

Iterative Data Augmentation with Large Language Models for Aspect-based Sentiment Analysis

Jun 29, 2024

Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Reasoners

Apr 28, 2024

ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding

Feb 19, 2024

Revisiting Knowledge Distillation for Autoregressive Language Models

Feb 19, 2024

Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models

Oct 20, 2023

Revisiting Token Dropping Strategy in Efficient BERT Pretraining

May 24, 2023

Self-Evolution Learning for Discriminative Language Model Pretraining

May 24, 2023

Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks

May 22, 2023

Towards Making the Most of ChatGPT for Machine Translation

Mar 24, 2023