Jeffrey Wu

Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning

Oct 29, 2024

Scaling and evaluating sparse autoencoders

Jun 06, 2024

FMB: a Functional Manipulation Benchmark for Generalizable Robotic Learning

Jan 16, 2024

Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning

Oct 18, 2023

Language Models are Few-Shot Learners

Jun 05, 2020

Scaling Laws for Neural Language Models

Jan 23, 2020

Fine-Tuning Language Models from Human Preferences

Sep 18, 2019