Yu-Gang Jiang

CL-bench: A Benchmark for Context Learning

Feb 03, 2026

Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs

Jan 29, 2026

FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions

Jan 19, 2026

A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5

Jan 16, 2026

What Do LLM Agents Know About Their World? Task2Quiz: A Paradigm for Studying Environment Understanding

Jan 14, 2026

Thinking with Deltas: Incentivizing Reinforcement Learning via Differential Visual Reasoning Policy

Jan 11, 2026

UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters

Dec 24, 2025

Memory in the Age of AI Agents

Dec 15, 2025

MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation

Dec 11, 2025

AttackVLA: Benchmarking Adversarial and Backdoor Attacks on Vision-Language-Action Models

Nov 15, 2025