Jong-Hyeok Lee

POSTECH, Korea

Bring More Attention to Syntactic Symmetry for Automatic Postediting of High-Quality Machine Translations

May 17, 2023

Towards Semi-Supervised Learning of Automatic Post-Editing: Data-Synthesis by Infilling Mask with Erroneous Tokens

Apr 08, 2022

mcBERT: Momentum Contrastive Learning with BERT for Zero-Shot Slot Filling

Mar 24, 2022

Modeling Inter-Speaker Relationship in XLNet for Contextual Spoken Language Understanding

Oct 28, 2019

Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs

Aug 15, 2019

Decay-Function-Free Time-Aware Attention to Context and Speaker Indicator for Spoken Language Understanding

Mar 29, 2019

Self-Attention-Based Message-Relevant Response Generation for Neural Conversation Model

May 23, 2018

Multiple Range-Restricted Bidirectional Gated Recurrent Units with Attention for Relation Classification

Nov 01, 2017

Improving Term Frequency Normalization for Multi-topical Documents, and Application to Language Modeling Approaches

Feb 08, 2015

Unlimited Vocabulary Grapheme to Phoneme Conversion for Korean TTS

Jun 10, 1998