Futa Waseda

MergePrint: Robust Fingerprinting against Merging Large Language Models

Oct 11, 2024

Leveraging Many-To-Many Relationships for Defending Against Visual-Language Adversarial Attacks

May 29, 2024

Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off

Feb 22, 2024

Defending Against Physical Adversarial Patch Attacks on Infrared Human Detection

Sep 27, 2023

Beyond In-Domain Scenarios: Robust Density-Aware Calibration

Feb 10, 2023

Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently

Dec 29, 2021