Yohei Oseki

Is Structure Dependence Shaped for Efficient Communication?: A Case Study on Coordination

Oct 14, 2024

Can Language Models Induce Grammatical Knowledge from Indirect Evidence?

Oct 08, 2024

LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs

Jul 04, 2024

Tree-Planted Transformers: Large Language Models with Implicit Syntactic Supervision

Feb 20, 2024

Emergent Word Order Universals from Cognitively-Motivated Language Models

Feb 19, 2024

Psychometric Predictive Power of Large Language Models

Nov 13, 2023

JCoLA: Japanese Corpus of Linguistic Acceptability

Sep 22, 2023

Composition, Attention, or Both?

Oct 24, 2022

Context Limitations Make Neural Language Models More Human-Like

May 23, 2022

Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars

Sep 10, 2021