
Scott A. Hale

HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter

Nov 23, 2024

LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages

Jun 11, 2024

SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks

May 17, 2024

Global News Synchrony and Diversity During the Start of the COVID-19 Pandemic

May 01, 2024

From Languages to Geographies: Towards Evaluating Cultural Bias in Hate Speech Datasets

Apr 27, 2024

The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models

Apr 24, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Apr 18, 2024

SimpleSafetyTests: a Test Suite for Identifying Critical Safety Risks in Large Language Models

Nov 14, 2023

Lost in Translation -- Multilingual Misinformation and its Evolution

Oct 27, 2023

The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values

Oct 11, 2023