
Liang Ding

Leveraging Metamemory Mechanisms for Enhanced Data-Free Code Generation in LLMs

Jan 14, 2025

Self-Evolution Knowledge Distillation for LLM-based Machine Translation

Dec 19, 2024

DynamicKV: Task-Aware Adaptive KV Cache Compression for Long Context LLMs

Dec 19, 2024

CogSteer: Cognition-Inspired Selective Layer Intervention for Efficient Semantic Steering in Large Language Models

Oct 23, 2024

Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL

Oct 15, 2024

Simultaneous Computation and Memory Efficient Zeroth-Order Optimizer for Fine-Tuning Large Language Models

Oct 13, 2024

Self-Powered LLM Modality Expansion for Large Speech-Text Models

Oct 04, 2024

MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators

Sep 22, 2024

$\mathbb{USCD}$: Improving Code Generation of LLMs by Uncertainty-Aware Selective Contrastive Decoding

Sep 09, 2024

Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models

Aug 28, 2024