
Mingze Ni

Deceiving Question-Answering Models: A Hybrid Word-Level Adversarial Approach

Nov 12, 2024

Dreaming is All You Need

Sep 03, 2024

Reversible Jump Attack to Textual Classifiers with Modification Reduction

Mar 21, 2024

AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization

Feb 20, 2024

Fraud's Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process

Mar 01, 2023

Learning to Prevent Profitless Neural Code Completion

Sep 13, 2022

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning

Oct 25, 2021