SangKeun Lee

Korea University

C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning

Nov 01, 2024

CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning

Oct 31, 2024

MELT: Materials-aware Continued Pre-training for Language Model Adaptation to Materials Science

Oct 19, 2024

Zero-shot Commonsense Reasoning over Machine Imagination

Oct 12, 2024

Mentor-KD: Making Small Language Models Better Multi-step Reasoners

Oct 11, 2024

DIVE: Towards Descriptive and Diverse Visual Commonsense Generation

Aug 15, 2024

Improving Bias Mitigation through Bias Experts in Natural Language Understanding

Dec 06, 2023

Dynamic Structure Pruning for Compressing CNNs

Mar 17, 2023

Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking

Dec 15, 2022

Dynamic Self-Attention: Computing Attention over Words Dynamically for Sentence Embedding

Aug 22, 2018