Stuart Russell

Berkeley

Extractive Structures Learned in Pretraining Enable Generalization on Finetuned Facts

Dec 05, 2024

Will an AI with Private Information Allow Itself to Be Switched Off?

Nov 25, 2024

RL, but don't do anything I wouldn't do

Oct 08, 2024

BAMDP Shaping: a Unified Theoretical Framework for Intrinsic Motivation and Reward Shaping

Sep 09, 2024

Monitoring Latent World States in Language Models with Propositional Probes

Jun 27, 2024

Evidence of Learned Look-Ahead in a Chess-Playing Neural Network

Jun 02, 2024

Diffusion On Syntax Trees For Program Synthesis

May 30, 2024

AI Alignment with Changing and Influenceable Reward Functions

May 28, 2024

Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

May 10, 2024

Social Choice for AI Alignment: Dealing with Diverse Human Feedback

Apr 16, 2024