
Chenchen Zhang

OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models

Nov 07, 2024

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024

MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models

Oct 15, 2024

DDK: Distilling Domain Knowledge for Efficient Large Language Models

Jul 23, 2024

LongIns: A Challenging Long-context Instruction-based Exam for LLMs

Jun 26, 2024

GIEBench: Towards Holistic Evaluation of Group Identity-based Empathy for Large Language Models

Jun 24, 2024

R2C2-Coder: Enhancing and Benchmarking Real-world Repository-level Code Completion Abilities of Code Large Language Models

Jun 04, 2024

D-CPT Law: Domain-specific Continual Pre-Training Scaling Law for Large Language Models

Jun 03, 2024

MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series

May 29, 2024

ConceptMath: A Bilingual Concept-wise Benchmark for Measuring Mathematical Reasoning of Large Language Models

Feb 23, 2024