
Seungone Kim

MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models

Oct 23, 2024

Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages

Oct 21, 2024

Better Instruction-Following Through Minimum Bayes Risk

Oct 07, 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons

Jul 24, 2024

Can Language Models Evaluate Human Written Text? Case Study on Korean Student Writing for Education

Jul 24, 2024

The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models

Jun 09, 2024

Aligning to Thousands of Preferences via System Message Generalization

May 28, 2024

Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

May 02, 2024

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards

Apr 16, 2024

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

Apr 03, 2024