Dawei Yang

NVR: Vector Runahead on NPUs for Sparse Memory Access
Feb 19, 2025

GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning
Feb 18, 2025

Is an Ultra Large Natural Image-Based Foundation Model Superior to a Retina-Specific Model for Detecting Ocular and Systemic Diseases?
Feb 10, 2025

GSRender: Deduplicated Occupancy Prediction via Weakly Supervised 3D Gaussian Splatting
Dec 19, 2024

Panoptic-FlashOcc: An Efficient Baseline to Marry Semantic Occupancy with Panoptic via Instance Center
Jun 15, 2024

M&M VTO: Multi-Garment Virtual Try-On and Editing
Jun 06, 2024

PillarHist: A Quantization-aware Pillar Feature Encoder based on Height-aware Histogram
May 29, 2024

I-LLM: Efficient Integer-Only Inference for Fully-Quantized Low-Bit Large Language Models
May 28, 2024

Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting
Dec 17, 2023

FlashOcc: Fast and Memory-Efficient Occupancy Prediction via Channel-to-Height Plugin
Nov 18, 2023