Nouha Dziri

2 OLMo 2 Furious

Dec 31, 2024

Multi-Attribute Constraint Satisfaction via Language Model Rewriting

Dec 26, 2024

TÜLU 3: Pushing Frontiers in Open Language Model Post-Training

Nov 22, 2024

SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation

Oct 22, 2024

To Err is AI : A Case Study Informing LLM Flaw Reporting Practices

Oct 15, 2024

Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction

Oct 10, 2024

AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text

Oct 05, 2024

Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance

Jul 10, 2024

WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs

Jun 26, 2024

WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models

Jun 26, 2024