
Jiahai Feng

Extractive Structures Learned in Pretraining Enable Generalization on Finetuned Facts

Dec 05, 2024

Monitoring Latent World States in Language Models with Propositional Probes

Jun 27, 2024

Learning adaptive planning representations with natural language guidance

Dec 13, 2023

How do Language Models Bind Entities in Context?

Oct 26, 2023

Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks

May 11, 2022

AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity

Jun 18, 2020