Angeliki Giannou

Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition

Oct 08, 2024

How Well Can Transformers Emulate In-context Newton's Method?

Mar 05, 2024

Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements

Jun 28, 2023

Dissecting Chain-of-Thought: A Study on Compositional In-Context Learning of MLPs

May 30, 2023

The Expressive Power of Tuning Only the Norm Layers

Feb 15, 2023

Looped Transformers as Programmable Computers

Jan 30, 2023

On the convergence of policy gradient methods to Nash equilibria in general stochastic games

Oct 17, 2022

Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information

Feb 04, 2021