Ming Tan

Code Representation Learning At Scale
Feb 02, 2024

Improving Prompt Tuning with Learned Prompting Layers
Oct 31, 2023

CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion
Oct 17, 2023

Exploring Continual Learning for Code Generation Models
Jul 05, 2023

Cross-View Hierarchy Network for Stereo Image Super-Resolution
Apr 13, 2023

DAFD: Domain Adaptation via Feature Disentanglement for Image Classification
Jan 30, 2023

Multi-lingual Evaluation of Code Generation Models
Oct 26, 2022

ContraGen: Effective Contrastive Learning For Causal Language Model
Oct 03, 2022

AIM 2022 Challenge on Super-Resolution of Compressed Image and Video: Dataset, Methods and Results
Aug 25, 2022

DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization
Mar 21, 2022