Yinzhi Cao

Data Lineage Inference: Uncovering Privacy Vulnerabilities of Dataset Pruning
Nov 24, 2024

Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning
Nov 04, 2024

RIPPLECOT: Amplifying Ripple Effect of Knowledge Editing in Language Models via Chain-of-Thought In-Context Learning
Oct 04, 2024

Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models
Jul 14, 2024

PLeak: Prompt Leaking Attacks against Large Language Model Applications
May 14, 2024

TrustLLM: Trustworthiness in Large Language Models
Jan 25, 2024

SneakyPrompt: Jailbreaking Text-to-image Generative Models
May 20, 2023

Addressing Heterogeneity in Federated Learning via Distributional Transformation
Oct 26, 2022

EdgeMixup: Improving Fairness for Skin Disease Classification and Segmentation
Feb 28, 2022

Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods
Mar 04, 2021