Publications by Shaofeng Li

Model Inversion in Split Learning for Personalized LLMs: New Insights from Information Bottleneck Theory

Jan 10, 2025

FD2-Net: Frequency-Driven Feature Decomposition Network for Infrared-Visible Object Detection

Dec 12, 2024

Show Me What and Where has Changed? Question Answering and Grounding for Remote Sensing Change Detection

Oct 31, 2024

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

Apr 08, 2024

Seeing is not always believing: The Space of Harmless Perturbations

Feb 03, 2024

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

Feb 22, 2022

Exposing Weaknesses of Malware Detectors with Explainability-Guided Evasion Attacks

Nov 19, 2021

Hidden Backdoors in Human-Centric Language Models

May 01, 2021

Deep Learning Backdoors

Jul 16, 2020

Invisible Backdoor Attacks Against Deep Neural Networks

Sep 06, 2019