
Leo Wanner

Disentangling Hate Across Target Identities

Oct 14, 2024

GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech Detection?

Feb 23, 2024

User Identity Linkage in Social Media Using Linguistic and Social Interaction Features

Aug 22, 2023

Towards Weakly-Supervised Hate Speech Classification Across Datasets

May 04, 2023

Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP

May 02, 2023

Multilingual Extraction and Categorization of Lexical Collocations with Graph-aware Transformers

May 23, 2022

How much pretraining data do language models need to learn syntax?

Sep 09, 2021

Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models

May 10, 2021

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations

Feb 10, 2021

Concept Extraction Using Pointer-Generator Networks

Aug 25, 2020