Ziqian Lin

Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition

Oct 08, 2024

Dual Operating Modes of In-Context Learning

Feb 29, 2024

Pre-trained Recommender Systems: A Causal Debiasing Perspective

Oct 30, 2023

LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks

Jun 15, 2022

MOOD: Multi-level Out-of-distribution Detection

Apr 30, 2021