Hanbin Hong

Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence

Jul 24, 2024

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Jun 10, 2024

Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness

May 25, 2024

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

Jul 31, 2023

Certifiable Black-Box Attack: Ensuring Provably Successful Attack for Adversarial Examples

Apr 10, 2023

Certified Adversarial Robustness via Anisotropic Randomized Smoothing

Jul 12, 2022

UniCR: Universally Approximated Certified Robustness via Randomized Smoothing

Jul 10, 2022

An Eye for an Eye: Defending against Gradient-based Attacks with Gradients

Feb 02, 2022