Guangke Chen

LaserGuider: A Laser Based Physical Backdoor Attack against Deep Neural Networks
Dec 05, 2024

A Proactive and Dual Prevention Mechanism against Illegal Song Covers empowered by Singing Voice Conversion
Jan 30, 2024

SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems
Sep 14, 2023

QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition Systems
May 23, 2023

Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition
Jun 07, 2022

AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems
Jun 07, 2022

SEC4SR: A Security Analysis Platform for Speaker Recognition
Sep 04, 2021

Attack as Defense: Characterizing Adversarial Examples using Robustness
Mar 13, 2021

BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks
Mar 12, 2021

Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
Nov 03, 2019