mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation

Oct 19, 2024
