Seongho Joe

Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment

Jul 31, 2024

Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models

Feb 23, 2024

Is Cross-modal Information Retrieval Possible without Training?

Apr 20, 2023

ContraCluster: Learning to Classify without Labels by Contrastive Self-Supervision and Prototype-Based Semi-Supervision

Apr 19, 2023

Shuffle & Divide: Contrastive Learning for Long Text

Apr 19, 2023

BiHPF: Bilateral High-Pass Filters for Robust Deepfake Detection

Aug 16, 2021

KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding

Jan 27, 2021

Analyzing Zero-shot Cross-lingual Transfer in Supervised NLP Tasks

Jan 26, 2021

Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks

Jan 26, 2021

SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning

Jan 16, 2021