Jue Wang

Maritime Communication in Evaporation Duct Environment with Ship Trajectory Optimization

Oct 08, 2025

Staircase Streaming for Low-Latency Multi-Agent Inference

Oct 06, 2025

MOOSE-Chem3: Toward Experiment-Guided Hypothesis Ranking via Simulated Experimental Feedback

May 23, 2025

FloE: On-the-Fly MoE Inference on Memory-constrained GPU

May 12, 2025

FloE: On-the-Fly MoE Inference on Memory-constrained GPU

May 09, 2025

Improving Model Alignment Through Collective Intelligence of Open-Source LLMs

May 05, 2025

CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning

Apr 24, 2025

HMI: Hierarchical Knowledge Management for Efficient Multi-Tenant Inference in Pretrained Language Models

Apr 24, 2025

Think Deep, Think Fast: Investigating Efficiency of Verifier-free Inference-time-scaling Methods

Apr 18, 2025

Scaling Instruction-Tuned LLMs to Million-Token Contexts via Hierarchical Synthetic Data Generation

Apr 17, 2025