Leo Wanner

Disentangling Hate Across Target Identities

Oct 14, 2024

GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech Detection?

Feb 23, 2024

User Identity Linkage in Social Media Using Linguistic and Social Interaction Features

Aug 22, 2023

Towards Weakly-Supervised Hate Speech Classification Across Datasets

May 04, 2023

Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP

May 02, 2023

Multilingual Extraction and Categorization of Lexical Collocations with Graph-aware Transformers

May 23, 2022

How much pretraining data do language models need to learn syntax?

Sep 09, 2021

Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models

May 10, 2021

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations

Feb 10, 2021

Concept Extraction Using Pointer-Generator Networks

Aug 25, 2020