
Nicolas Flammarion

LIENS, SIERRA

Long-Context Linear System Identification

Oct 08, 2024

Simplicity bias and optimization threshold in two-layer ReLU networks

Oct 03, 2024

Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants

Aug 07, 2024

Does Refusal Training in LLMs Generalize to the Past Tense?

Jul 16, 2024

Implicit Bias of Mirror Flow on Separable Data

Jun 18, 2024

Is In-Context Learning Sufficient for Instruction Following in LLMs?

May 30, 2024

Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs

Apr 22, 2024

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

Apr 02, 2024

JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models

Mar 28, 2024

Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks

Mar 08, 2024