Lisa Bylinina

Are BabyLMs Second Language Learners?

Oct 28, 2024

Individuation in Neural Models with and without Visual Grounding

Sep 27, 2024

Black Big Boxes: Do Language Models Hide a Theory of Adjective Order?

Jul 02, 2024

Too Much Information: Keeping Training Simple for BabyLMs

Nov 03, 2023

Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations

Jun 04, 2023

Old BERT, New Tricks: Artificial Language Learning for Pre-Trained Language Models

Sep 13, 2021

Transformers in the loop: Polarity in neural models of language

Sep 08, 2021