Rishi Bommasani

The Reality of AI and Biorisk

Dec 02, 2024

Effective Mitigations for Systemic Risks from General-Purpose AI

Nov 14, 2024

Language model developers should report train-test overlap

Oct 10, 2024

The Foundation Model Transparency Index v1.1: May 2024

Jul 17, 2024

The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources

Jun 26, 2024

A Safe Harbor for AI Evaluation and Red Teaming

Mar 07, 2024

On the Societal Impact of Open Foundation Models

Feb 27, 2024

Foundation Model Transparency Reports

Feb 26, 2024

The Foundation Model Transparency Index

Oct 19, 2023

Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes

Jul 12, 2023