Faeze Brahman

RESTOR: Knowledge Recovery through Machine Unlearning
Oct 31, 2024

Hybrid Preferences: Learning to Route Instances for Human vs. AI Feedback
Oct 24, 2024

HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions
Sep 26, 2024

AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents
Sep 13, 2024

Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement
Jul 25, 2024

How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models
Jun 29, 2024

WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models
Jun 26, 2024

WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
Jun 07, 2024

Information-Theoretic Distillation for Reference-less Summarization
Mar 20, 2024

MacGyver: Are Large Language Models Creative Problem Solvers?
Nov 16, 2023