
Lu Tian

MSWA: Refining Local Attention with Multi-Scale Window Attention

Jan 02, 2025

EGSRAL: An Enhanced 3D Gaussian Splatting based Renderer with Automated Labeling for Large-Scale Driving Scene

Dec 20, 2024

FTP: A Fine-grained Token-wise Pruner for Large Language Models via Token Routing

Dec 16, 2024

Fast Occupancy Network

Dec 10, 2024

DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization

Oct 22, 2024

Diagram Formalization Enhanced Multi-Modal Geometry Problem Solver

Sep 09, 2024

Enhancing One-shot Pruned Pre-trained Language Models through Sparse-Dense-Sparse Mechanism

Aug 20, 2024

Towards Scale-Aware Full Surround Monodepth with Transformers

Jul 15, 2024

VIPS-Odom: Visual-Inertial Odometry Tightly-coupled with Parking Slots for Autonomous Parking

Jul 06, 2024

Amphista: Accelerate LLM Inference with Bi-directional Multiple Drafting Heads in a Non-autoregressive Style

Jun 19, 2024