Zhan Qin

Mitigating Privacy Risks in LLM Embeddings from Embedding Inversion

Nov 06, 2024

PointNCBW: Towards Dataset Ownership Verification for Point Clouds via Negative Clean-label Backdoor Watermark

Aug 10, 2024

Defending Jailbreak Attack in VLMs via Cross-modality Information Detector

Aug 01, 2024

TAPI: Towards Target-Specific and Adversarial Prompt Injection against Code LLMs

Jul 12, 2024

Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning

Jul 06, 2024

Prompt-Consistency Image Generation (PCIG): A Unified Framework Integrating LLMs, Knowledge Graphs, and Controllable Diffusion Models

Jun 24, 2024

A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions

Jun 06, 2024

Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution

May 08, 2024

A Causal Explainable Guardrails for Large Language Models

May 07, 2024

Going Proactive and Explanatory Against Malware Concept Drift

May 07, 2024