Yihong Xu

Annealed Winner-Takes-All for Motion Forecasting
Sep 18, 2024

Lost and Found: Overcoming Detector Failures in Online Multi-Object Tracking
Jul 16, 2024

Valeo4Cast: A Modular Approach to End-to-End Forecasting
Jun 12, 2024

Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
Jul 12, 2023

Challenges of Using Real-World Sensory Inputs for Motion Forecasting in Autonomous Driving
Jun 15, 2023

Learning-based Spatial and Angular Information Separation for Light Field Compression
Apr 13, 2023

DNN Training Acceleration via Exploring GPGPU Friendly Sparsity
Mar 11, 2022

CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction
Mar 09, 2022

TransCenter: Transformers with Dense Queries for Multiple-Object Tracking
Mar 28, 2021

DeepMOT: A Differentiable Framework for Training Multiple Object Trackers
Jun 15, 2019