
Zhenzhu Yang

I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm

Aug 15, 2024

MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series

May 29, 2024

MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI

Nov 27, 2023

Interactive Natural Language Processing

May 22, 2023