Helong Zhou

VAD: Vectorized Scene Representation for Efficient Autonomous Driving

Mar 21, 2023

Perceive, Interact, Predict: Learning Dynamic and Static Clues for End-to-End Motion Prediction

Dec 05, 2022

MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition

Aug 11, 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition

Jul 23, 2022

HOPE: Hierarchical Spatial-temporal Network for Occupancy Flow Prediction

Jun 21, 2022

Cross-Image Relational Knowledge Distillation for Semantic Segmentation

Apr 14, 2022

Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition

Mar 26, 2022

Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective

Feb 01, 2021

VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing

Jul 12, 2019