Baoyuan Wu

ESpeW: Robust Copyright Protection for LLM-based EaaS via Embedding-Specific Watermark
Oct 24, 2024

$X^2$-DFD: A framework for eXplainable and eXtendable Deepfake Detection
Oct 08, 2024

C2P-CLIP: Injecting Category Common Prompt in CLIP to Enhance Generalization in Deepfake Detection
Aug 19, 2024

RiskAwareBench: Towards Evaluating Physical Risk Awareness for High-level Planning of LLM-based Embodied Agents
Aug 08, 2024

Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack
May 30, 2024

Decentralized Directed Collaboration for Personalized Federated Learning
May 28, 2024

Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor
May 25, 2024

Fragile Model Watermark for integrity protection: leveraging boundary volatility and sensitive sample-pairing
Apr 11, 2024

Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics
Mar 26, 2024

Data-Independent Operator: A Training-Free Artifact Representation Extractor for Generalizable Deepfake Detection
Mar 11, 2024