Samuel J. Gershman

Do Mice Grok? Glimpses of Hidden Progress During Overtraining in Sensory Cortex

Nov 05, 2024

Artificial intelligence for science: The easy and hard problems

Aug 24, 2024

Predictive representations: building blocks of intelligence

Feb 09, 2024

Toward a More Biologically Plausible Neural Network Model of Latent Cause Inference

Dec 13, 2023

How should the advent of large language models affect the practice of science?

Dec 05, 2023

Grokking as the Transition from Lazy to Rich Training Dynamics

Oct 09, 2023

Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration, and Planning

Jul 27, 2021

Hybrid Memoised Wake-Sleep: Approximate Inference at the Discrete-Continuous Interface

Jul 04, 2021

Language-Mediated, Object-Centric Representation Learning

Dec 31, 2020

Analyzing machine-learned representations: A natural language case study

Sep 12, 2019