Zhanxing Zhu

Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization

Jun 20, 2024

Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability

Apr 01, 2023

MonoFlow: Rethinking Divergence GANs via the Perspective of Differential Equations

Feb 03, 2023

Fine-grained differentiable physics: a yarn-level model for fabrics

Feb 01, 2022

Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI

Jul 26, 2021

Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization

Mar 31, 2021

Amata: An Annealing Mechanism for Adversarial Training Acceleration

Dec 15, 2020

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher

Oct 20, 2020

Neural Approximate Sufficient Statistics for Implicit Models

Oct 20, 2020

Automatic Data Augmentation for 3D Medical Image Segmentation

Oct 07, 2020