Robert Frank

Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming

Jun 26, 2024

LIEDER: Linguistically-Informed Evaluation for Discourse Entity Recognition

Mar 10, 2024

How Abstract Is Linguistic Generalization in Large Language Models? Experiments with Argument Structure

Nov 08, 2023

False perspectives on human language: why statistics needs linguistics

Feb 17, 2023

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Jan 26, 2023

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Jun 10, 2022

Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity

Apr 13, 2022

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models

Mar 17, 2022

Do Language Models Learn Position-Role Mappings?

Feb 08, 2022

Transformers Generalize Linearly

Sep 24, 2021