mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation

Oct 19, 2024

View paper on arXiv
