Ryokan Ri

Self-Translate-Train: A Simple but Strong Baseline for Cross-lingual Transfer of Large Language Models
Jun 29, 2024

Large Vocabulary Size Improves Large Language Models
Jun 24, 2024

LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation
Feb 18, 2024

Emergent Communication with Attention
May 18, 2023

EASE: Entity-Aware Contrastive Learning of Sentence Embedding
May 09, 2022

Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models
Mar 22, 2022

mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models
Oct 15, 2021

Modeling Target-side Inflection in Placeholder Translation
Jul 01, 2021

Zero-pronoun Data Augmentation for Japanese-to-English Translation
Jul 01, 2021

Document-aligned Japanese-English Conversation Parallel Corpus
Dec 11, 2020