Lu Lin

Training a Label-Noise-Resistant GNN with Reduced Complexity

Nov 17, 2024

AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion Models

Oct 28, 2024

Mitigating Graph Covariate Shift via Score-based Out-of-distribution Augmentation

Oct 23, 2024

Adversarially Robust Industrial Anomaly Detection Through Diffusion Model

Aug 09, 2024

Graph Adversarial Diffusion Convolution

Jun 04, 2024

XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution

May 30, 2024

Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization

May 28, 2024

WordGame: Efficient & Effective LLM Jailbreak via Simultaneous Obfuscation in Query and Response

May 22, 2024

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

Oct 02, 2023

Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM

Sep 18, 2023