
Sheng Wang

CropCraft: Inverse Procedural Modeling for 3D Reconstruction of Crop Plants

Nov 14, 2024

FM-TS: Flow Matching for Time Series Generation

Nov 12, 2024

Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

Oct 22, 2024

ProReason: Multi-Modal Proactive Reasoning with Decoupled Eyesight and Wisdom

Oct 18, 2024

MlingConf: A Comprehensive Study of Multilingual Confidence Estimation on Large Language Models

Oct 16, 2024

MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models

Oct 16, 2024

Unleashing the Power of LLMs as Multi-Modal Encoders for Text and Graph-Structured Data

Oct 15, 2024

QSpec: Speculative Decoding with Complementary Quantization Schemes

Oct 15, 2024

MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards

Oct 01, 2024

How Far Can Cantonese NLP Go? Benchmarking Cantonese Capabilities of Large Language Models

Aug 29, 2024