Wenjie Qu

Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models with Adaptive Expert Placement

Jul 05, 2024

A Certified Radius-Guided Attack Framework to Image Segmentation Models

Apr 05, 2023

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service

Jan 07, 2023

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

Dec 06, 2022

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

Oct 03, 2022

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning

Aug 25, 2021