
Minchan Jeong

Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL

Jul 20, 2024

BAPO: Base-Anchored Preference Optimization for Personalized Alignment in Large Language Models

Jun 30, 2024

FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning

Jun 04, 2024

Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning

Feb 13, 2024

FedSoL: Bridging Global Alignment and Local Generality in Federated Learning

Aug 24, 2023

Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning

Mar 03, 2023

Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective

Feb 03, 2023

Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning

Jun 06, 2021