Aohan Zeng

VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents
Aug 12, 2024

ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools
Jun 18, 2024

ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback
Apr 03, 2024

ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline
Apr 03, 2024

Understanding Emergent Abilities of Language Models from the Loss Perspective
Mar 30, 2024

APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
Jan 12, 2024

xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein
Jan 11, 2024

CritiqueLLM: Scaling LLM-as-Critic for Effective and Explainable Evaluation of Large Language Model Generation
Nov 30, 2023

AgentTuning: Enabling Generalized Agent Abilities for LLMs
Oct 22, 2023

LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
Aug 28, 2023