Hongyu Wang

Bitnet.cpp: Efficient Edge Inference for Ternary LLMs

Feb 17, 2025

DenseSplat: Densifying Gaussian Splatting SLAM with Neural Radiance Prior

Feb 13, 2025

GARAD-SLAM: 3D GAussian splatting for Real-time Anti Dynamic SLAM

Feb 05, 2025

Robotic Programmer: Video Instructed Policy Code Generation for Robotic Manipulation

Jan 08, 2025

MTS-UNMixers: Multivariate Time Series Forecasting via Channel-Time Dual Unmixing

Nov 26, 2024

BitNet a4.8: 4-bit Activations for 1-bit LLMs

Nov 07, 2024

1-bit AI Infra: Part 1.1, Fast and Lossless BitNet b1.58 Inference on CPUs

Oct 21, 2024

Cross Fusion RGB-T Tracking with Bi-directional Adapter

Aug 30, 2024

Q-Sparse: All Large Language Models can be Fully Sparsely-Activated

Jul 15, 2024

HTD-Mamba: Efficient Hyperspectral Target Detection with Pyramid State Space Model

Jul 09, 2024