
Tal Linzen

Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora

Dec 06, 2024

What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length

Nov 04, 2024

How Does Code Pretraining Affect Language Model Task Performance?

Sep 06, 2024

Testing learning hypotheses using neural networks by manipulating learning data

Jul 05, 2024

[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

Apr 09, 2024

SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser

Mar 11, 2024

Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment

Feb 29, 2024

In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax

Nov 13, 2023

A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models

Nov 01, 2023

The Impact of Depth and Width on Transformer Language Model Generalization

Oct 30, 2023