
Jiaming Liu

Unpaired Image-to-Image Translation via a Self-Supervised Semantic Bridge

Feb 18, 2026

In-Hospital Stroke Prediction from PPG-Derived Hemodynamic Features

Feb 10, 2026

Addressing data annotation scarcity in Brain Tumor Segmentation on 3D MRI scan Using a Semi-Supervised Teacher-Student Framework

Feb 09, 2026

TwinRL-VLA: Digital Twin-Driven Reinforcement Learning for Real-World Robotic Manipulation

Feb 09, 2026

RoboMIND 2.0: A Multimodal, Bimanual Mobile Manipulation Dataset for Generalizable Embodied Intelligence

Dec 31, 2025

Loom: Diffusion-Transformer for Interleaved Generation

Dec 20, 2025

GRACE: Designing Generative Face Video Codec via Agile Hardware-Centric Workflow

Nov 12, 2025

Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model

Oct 21, 2025

MLA: A Multisensory Language-Action Model for Multimodal Understanding and Forecasting in Robotic Manipulation

Sep 30, 2025

WoW: Towards a World omniscient World model Through Embodied Interaction

Sep 26, 2025