Gihun Lee

Learning to Summarize from LLM-generated Feedback

Oct 17, 2024

BAPO: Base-Anchored Preference Optimization for Personalized Alignment in Large Language Models

Jun 30, 2024

FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning

Nov 22, 2023

Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions

Nov 01, 2023

FedSoL: Bridging Global Alignment and Local Generality in Federated Learning

Aug 24, 2023

The Multi-modality Cell Segmentation Challenge: Towards Universal Solutions

Aug 10, 2023

MEDIAR: Harmony of Data-Centric and Model-Centric for Multi-Modality Microscopy

Dec 07, 2022

Self-Contrastive Learning

Jul 14, 2021

Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning

Jun 06, 2021

MixCo: Mix-up Contrastive Learning for Visual Representation

Oct 13, 2020