Richard L. Lewis

Combining Behaviors with the Successor Features Keyboard
Oct 24, 2023

In-Context Analogical Reasoning with Pre-Trained Language Models
Jun 05, 2023

Composing Task Knowledge with Modular Successor Feature Approximators
Jan 28, 2023

In-Context Policy Iteration
Oct 07, 2022

Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention
Apr 26, 2021

Reinforcement Learning of Implicit and Explicit Control Flow in Instructions
Feb 25, 2021

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments
Oct 28, 2020

Variance-Based Rewards for Approximate Bayesian Reinforcement Learning
Mar 15, 2012