
Junzhi Yu

VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory

Jan 13, 2026

ReSPIRe: Informative and Reusable Belief Tree Search for Robot Probabilistic Search and Tracking in Unknown Environments

Dec 31, 2025

Detect Anything via Next Point Prediction

Oct 14, 2025

LET-US: Long Event-Text Understanding of Scenes

Aug 10, 2025

A Novel ViDAR Device With Visual Inertial Encoder Odometry and Reinforcement Learning-Based Active SLAM Method

Jun 16, 2025

Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning

Jun 04, 2025

TrackVLA: Embodied Visual Tracking in the Wild

May 29, 2025

A Novel Underwater Vehicle With Orientation Adjustable Thrusters: Design and Adaptive Tracking Control

Mar 25, 2025

HandOS: 3D Hand Reconstruction in One Stage

Dec 02, 2024

Robo-MUTUAL: Robotic Multimodal Task Specification via Unimodal Learning

Oct 02, 2024