
Ting-Yun Chang

When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models

Jun 19, 2024

Do Localization Methods Actually Localize Memorized Data in LLMs?

Nov 15, 2023

Careful Data Curation Stabilizes In-context Learning

Dec 20, 2022

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Jun 18, 2022

Rethinking Why Intermediate-Task Fine-Tuning Works

Sep 01, 2021

Go Beyond Plain Fine-tuning: Improving Pretrained Models for Social Commonsense

May 12, 2021

Incorporating Commonsense Knowledge Graph in Pretrained Models for Social Commonsense Tasks

May 12, 2021

TinyGAN: Distilling BigGAN for Conditional Image Generation

Sep 29, 2020

xSense: Learning Sense-Separated Sparse Representations and Textual Definitions for Explainable Word Sense Networks

Sep 10, 2018