Wei Tao

Triage: Hierarchical Visual Budgeting for Efficient Video Reasoning in Vision-Language Models
Jan 30, 2026

Advances and Frontiers of LLM-based Issue Resolution in Software Engineering: A Comprehensive Survey
Jan 15, 2026

Optimizing the Adversarial Perturbation with a Momentum-based Adaptive Matrix
Dec 16, 2025

FastBEV++: Fast by Algorithm, Deployable by Design
Dec 09, 2025

SWE-Factory: Your Automated Factory for Issue Resolution Training Data and Evaluation Benchmarks
Jun 12, 2025

MoQAE: Mixed-Precision Quantization for Long-Context LLM Inference via Mixture of Quantization-Aware Experts
Jun 09, 2025

MADLLM: Multivariate Anomaly Detection via Pre-trained LLMs
Apr 13, 2025

Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context LLM Inference
Mar 30, 2025

Dedicated Inference Engine and Binary-Weight Neural Networks for Lightweight Instance Segmentation
Jan 03, 2025

UNet--: Memory-Efficient and Feature-Enhanced Network Architecture based on U-Net with Reduced Skip-Connections
Dec 24, 2024