Yilan Chen

Cross-Task Linearity Emerges in the Pretraining-Finetuning Paradigm

Feb 06, 2024

The Importance of Prompt Tuning for Automated Neuron Explanations

Oct 11, 2023

Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification

Aug 18, 2022

Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning

Feb 04, 2022

On the Equivalence between Neural Network and Support Vector Machine

Nov 11, 2021

Explaining Knowledge Distillation by Quantifying the Knowledge

Mar 07, 2020