Carolyn Ashurst

Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey

Sep 27, 2023

Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness

Feb 23, 2022

AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements

Nov 02, 2021

RAFT: A Real-World Few-Shot Text Classification Benchmark

Sep 28, 2021

Institutionalising Ethics in AI through Broader Impact Requirements

May 30, 2021