
Yangruibo Ding

SemCoder: Training Code Language Models with Comprehensive Semantics

Jun 03, 2024

CYCLE: Learning to Self-Refine the Code Generation

Mar 27, 2024

Vulnerability Detection with Code Language Models: How Far Are We?

Mar 27, 2024

Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain

Oct 21, 2023

CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion

Oct 17, 2023

CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context

Dec 20, 2022

NatGen: Generative pre-training by "Naturalizing" source code

Jun 15, 2022

VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements

Jan 13, 2022

Contrastive Learning for Source Code with Structural and Functional Properties

Oct 08, 2021

Patching as Translation: the Data and the Metaphor

Sep 01, 2020