
Chunpeng Wu

Cycle Self-Training for Semi-Supervised Object Detection with Distribution Consistency Reweighting

Jul 12, 2022

MVStylizer: An Efficient Edge-Assisted Video Photorealistic Style Transfer System for Mobile Phones

Jun 01, 2020

Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness

Feb 17, 2020

Conditional Transferring Features: Scaling GANs to Thousands of Classes with 30% Less High-quality Data for Training

Sep 25, 2019

Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack

Dec 06, 2018

SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning

Sep 01, 2018

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

May 11, 2018

TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

Dec 29, 2017

Coordinating Filters for Faster Deep Neural Networks

Jul 25, 2017

A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation

Apr 03, 2017