Liya Su

Can LLMs Deeply Detect Complex Malicious Queries? A Framework for Jailbreaking via Obfuscating Intent

May 07, 2024

The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks

Oct 24, 2023