Matej Ulčar

Sequence to sequence pretraining for a less-resourced Slovenian language

Jul 28, 2022

Training dataset and dictionary sizes matter in BERT models: the case of Baltic languages

Dec 20, 2021

Cross-lingual alignments of ELMo contextual embeddings

Jul 22, 2021

Evaluation of contextual embeddings on less-resourced languages

Jul 22, 2021

FinEst BERT and CroSloEngual BERT: less is more in multilingual models

Jun 14, 2020

CoSimLex: A Resource for Evaluating Graded Word Similarity in Context

Dec 18, 2019

High Quality ELMo Embeddings for Seven Less-Resourced Languages

Nov 22, 2019

Multilingual Culture-Independent Word Analogy Datasets

Nov 22, 2019