
David Evans

The Mismeasure of Man and Models: Evaluating Allocational Harms in Large Language Models

Aug 02, 2024

The OPS-SAT benchmark for detecting anomalies in satellite telemetry

Jun 29, 2024

Do Parameters Reveal More than Loss for Membership Inference?

Jun 17, 2024

Addressing Both Statistical and Causal Gender Fairness in NLP Models

Mar 30, 2024

Do Membership Inference Attacks Work on Large Language Models?

Feb 12, 2024

Understanding Variation in Subpopulation Susceptibility to Poisoning Attacks

Nov 20, 2023

SoK: Pitfalls in Evaluating Black-Box Attacks

Oct 26, 2023

SoK: Memorization in General-Purpose Large Language Models

Oct 24, 2023

When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?

Jul 03, 2023

Manipulating Transfer Learning for Property Inference

Mar 21, 2023