
Dingfan Chen

Inside the Black Box: Detecting Data Leakage in Pre-trained Language Encoders

Aug 20, 2024

PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics

Apr 06, 2024

Towards Biologically Plausible and Private Gene Expression Data Generation

Feb 07, 2024

A Unified View of Differentially Private Deep Generative Modeling

Sep 27, 2023

MargCTGAN: A "Marginally" Better CTGAN for the Low Sample Regime

Jul 16, 2023

Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy

Feb 15, 2023

Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy

Feb 02, 2023

Private Set Generation with Discriminative Information

Nov 07, 2022

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

Jul 12, 2022

Responsible Disclosure of Generative Models Using Scalable Fingerprinting

Dec 16, 2020