Stephanie C. Y. Chan

Learned feature representations are biased by complexity, learning order, position, and more

May 09, 2024

What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation

Apr 10, 2024

The Transient Nature of Emergent In-Context Learning in Transformers

Nov 15, 2023

Transformers generalize differently from information stored in context vs in weights

Oct 11, 2022

Language models show human-like content effects on reasoning

Jul 14, 2022

Semantic Exploration from Language Abstractions and Pretrained Representations

Apr 08, 2022

Can language models learn from explanations in context?

Apr 05, 2022

Zipfian environments for Reinforcement Learning

Mar 15, 2022

Tell me why! -- Explanations support learning of relational and causal structure

Dec 08, 2021

Towards mental time travel: a hierarchical memory for reinforcement learning agents

May 28, 2021