Zhengyu Zhang

SOLARIS: Speculative Offloading of Latent-bAsed Representation for Inference Scaling

Apr 13, 2026

Channel Measurements and Modeling based on Composite Environmental Factor for Urban Street-Canyon Intersections

Apr 02, 2026

Deep Learning Based Site-Specific Channel Inference Using Satellite Images

Mar 30, 2026

Emergent Dexterity via Diverse Resets and Large-Scale Reinforcement Learning

Mar 16, 2026

A Deployment-Friendly Foundational Framework for Efficient Computational Pathology

Feb 15, 2026

Artificial Intelligence Empowered Channel Prediction: A New Paradigm for Propagation Channel Modeling

Jan 14, 2026

MambaMIL+: Modeling Long-Term Contextual Patterns for Gigapixel Whole Slide Image

Dec 19, 2025

Meta Lattice: Model Space Redesign for Cost-Effective Industry-Scale Ads Recommendations

Dec 15, 2025

A Geometry Map-Based Site-Specific Propagation Channel Model for Urban Scenarios

Nov 19, 2025

ExplainableGuard: Interpretable Adversarial Defense for Large Language Models Using Chain-of-Thought Reasoning

Nov 15, 2025