Hongwei Yao

FIT-Print: Towards False-claim-resistant Model Ownership Verification via Targeted Fingerprint

Jan 26, 2025

Mitigating Privacy Risks in LLM Embeddings from Embedding Inversion

Nov 06, 2024

TAPI: Towards Target-Specific and Adversarial Prompt Injection against Code LLMs

Jul 12, 2024

Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution

May 08, 2024

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

Oct 19, 2023

RemovalNet: DNN Fingerprint Removal Attacks

Aug 31, 2023

FDINet: Protecting against DNN Model Extraction via Feature Distortion Index

Jun 22, 2023