Wangchunshu Zhou

AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions

Oct 29, 2024

A Comparative Study on Reasoning Patterns of OpenAI's o1 Model

Oct 17, 2024

PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment

Oct 17, 2024

PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness

Oct 09, 2024

MIO: A Foundation Model on Multimodal Tokens

Sep 26, 2024

HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models

Sep 24, 2024

Symbolic Learning Enables Self-Evolving Agents

Jun 26, 2024

MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series

May 29, 2024

MIMIR: A Streamlined Platform for Personalized Agent Tuning in Domain Expertise

Apr 03, 2024

CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models

Feb 20, 2024