
Gabriel Recchia


FindTheFlaws: Annotated Errors for Detecting Flawed Reasoning and Scalable Oversight Research

Mar 29, 2025

Humanity's Last Exam

Jan 24, 2025

Foundational Challenges in Assuring Alignment and Safety of Large Language Models

Apr 15, 2024

Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023

Teaching Autoregressive Language Models Complex Tasks By Demonstration

Sep 11, 2021