
Mingze Ni

Cross-Entropy Attacks to Language Models via Rare Event Simulation

Jan 21, 2025

Deceiving Question-Answering Models: A Hybrid Word-Level Adversarial Approach

Nov 12, 2024

Dreaming is All You Need

Sep 03, 2024

Reversible Jump Attack to Textual Classifiers with Modification Reduction

Mar 21, 2024

AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization

Feb 20, 2024

Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process

Mar 01, 2023

Learning to Prevent Profitless Neural Code Completion

Sep 13, 2022

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning

Oct 25, 2021