Gong Zhang

Efficient Failure Management for Multi-Agent Systems with Reasoning Trace Representation

Mar 23, 2026

RuntimeSlicer: Towards Generalizable Unified Runtime State Representation for Failure Management

Mar 23, 2026

M2F: Automated Formalization of Mathematical Literature at Scale

Feb 19, 2026

HyLRA: Hybrid Layer Reuse Attention for Efficient Long-Context Inference

Jan 31, 2026

A Mathematical Theory of Top-$k$ Sparse Attention via Total Variation Distance

Dec 08, 2025

T2I-Copilot: A Training-Free Multi-Agent Text-to-Image System for Enhanced Prompt Interpretation and Interactive Generation

Jul 28, 2025

Efficient Long-Context LLM Inference via KV Cache Clustering

Jun 13, 2025

AutoSchemaKG: Autonomous Knowledge Graph Construction through Dynamic Schema Induction from Web-Scale Corpora

May 29, 2025

XL3M: A Training-free Framework for LLM Length Extension Based on Segment-wise Inference

May 28, 2024

Super-Resolution Harmonic Retrieval of Non-Circular Signals

Jan 17, 2023