Juyeon Heo

Do LLMs "know" internally when they follow instructions?

Oct 22, 2024

Do LLMs estimate uncertainty well in instruction-following?

Oct 18, 2024

On Evaluating LLMs' Capabilities as Functional Approximators: A Bayesian Perspective

Oct 06, 2024

Do Concept Bottleneck Models Obey Locality?

Jan 02, 2024

Estimation of Concept Explanations Should be Uncertainty Aware

Dec 13, 2023

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization

Nov 10, 2023

Leveraging Task Structures for Improved Identifiability in Neural Network Representations

Jun 26, 2023

Robust Learning from Explanations

Mar 11, 2023

Robust Explanation Constraints for Neural Networks

Dec 16, 2022

Towards More Robust Interpretation via Local Gradient Alignment

Dec 07, 2022