Minhui Xue

AI-Compass: A Comprehensive and Effective Multi-module Testing Tool for AI Systems

Nov 09, 2024

Reconstruction of Differentially Private Text Sanitization via Large Language Models

Oct 16, 2024

Edge Unlearning is Not "on Edge"! An Adaptive Exact Unlearning System on Resource-Constrained Devices

Oct 15, 2024

Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems

Jul 11, 2024

QUEEN: Query Unlearning against Model Extraction

Jul 01, 2024

On Security Weaknesses and Vulnerabilities in Deep Learning Systems

Jun 12, 2024

Provably Unlearnable Examples

May 06, 2024

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

Mar 27, 2024

Efficient Constrained $k$-Center Clustering with Background Knowledge

Jan 23, 2024

MFABA: A More Faithful and Accelerated Boundary-based Attribution Method for Deep Neural Networks

Dec 21, 2023