Wei-Yun Ma

Not All LLM-Generated Data Are Equal: Rethinking Data Weighting in Text Classification

Oct 28, 2024

Generating Attractive and Authentic Copywriting from Customer Reviews

Apr 22, 2024

Extending the Pre-Training of BLOOM for Improved Support of Traditional Chinese: Models, Methods and Results

Mar 08, 2023

Roof-BERT: Divide Understanding Labour and Join in Work

Dec 13, 2021

DCT: Dynamic Compressive Transformer for Modeling Unbounded Sequence

Oct 10, 2021

H-FND: Hierarchical False-Negative Denoising for Distant Supervision Relation Extraction

Dec 14, 2020

Predict and Use Latent Patterns for Short-Text Conversation

Oct 27, 2020

Why Attention? Analyzing and Remedying BiLSTM Deficiency in Modeling Cross-Context for NER

Oct 07, 2019

Remedying BiLSTM-CNN Deficiency in Modeling Cross-Context for NER

Aug 29, 2019

CA-EHN: Commonsense Word Analogy from E-HowNet

Aug 21, 2019