
Midan Shim

Language Chameleon: Transformation analysis between languages using Cross-lingual Post-training based on Pre-trained language models

Sep 14, 2022

Empirical study on BlenderBot 2.0 Errors Analysis in terms of Model, Data and User-Centric Approach

Jan 10, 2022

Empirical Analysis of Korean Public AI Hub Parallel Corpora and in-depth Analysis using LIWC

Oct 28, 2021