
Kamalika Chaudhuri

UCSD

ExpProof: Operationalizing Explanations for Confidential Models with ZKPs

Feb 06, 2025

A Closer Look at the Learnability of Out-of-Distribution (OOD) Detection

Jan 15, 2025

Privacy-Preserving Retrieval Augmented Generation with Differential Privacy

Dec 06, 2024

Auditing $f$-Differential Privacy in One Run

Oct 29, 2024

Distribution Learning with Valid Outputs Beyond the Worst-Case

Oct 21, 2024

Evaluating Deep Unlearning in Large Language Models

Oct 19, 2024

Aligning LLMs to Be Robust Against Prompt Injection

Oct 07, 2024

Influence-based Attributions can be Manipulated

Sep 10, 2024

On Differentially Private U Statistics

Jul 06, 2024

Beyond Discrepancy: A Closer Look at the Theory of Distribution Shift

May 29, 2024