Valentina Pyatkin

Which course? Discourse! Teaching Discourse and Generation in the Era of LLMs
Feb 02, 2026

PluriHarms: Benchmarking the Full Spectrum of Human Judgments on AI Harm
Jan 13, 2026

Olmo 3
Dec 15, 2025

Generalizing Verifiable Instruction Following
Jul 03, 2025

Is It JUST Semantics? A Case Study of Discourse Particle Understanding in LLMs
Jun 05, 2025

IssueBench: Millions of Realistic Prompts for Measuring Issue Bias in LLM Writing Assistance
Feb 12, 2025

2 OLMo 2 Furious
Dec 31, 2024

TÜLU 3: Pushing Frontiers in Open Language Model Post-Training
Nov 22, 2024

Hybrid Preferences: Learning to Route Instances for Human vs. AI Feedback
Oct 24, 2024

SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation
Oct 22, 2024