Ulme Wennberg

Exploring Internal Numeracy in Language Models: A Case Study on ALBERT
Apr 25, 2024

Wavebender GAN: An architecture for phonetically meaningful speech manipulation
Feb 22, 2022

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
Jun 03, 2021

Entity, Relation, and Event Extraction with Contextualized Span Representations
Sep 10, 2019