Siliang Tang

Meta-Reflection: A Feedback-Free Reflection Learning Framework

Dec 18, 2024

Iris: Breaking GUI Complexity with Adaptive Focus and Self-Refining

Dec 13, 2024

Mastering Collaborative Multi-modal Data Selection: A Focus on Informativeness, Uniqueness, and Representativeness

Dec 09, 2024

STEP: Enhancing Video-LLMs' Compositional Reasoning by Spatio-Temporal Graph-guided Self-Training

Nov 29, 2024

AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea

Nov 24, 2024

Unified Generative and Discriminative Training for Multi-modal Large Language Models

Nov 01, 2024

GraphCLIP: Enhancing Transferability in Graph Foundation Models for Text-Attributed Graphs

Oct 15, 2024

RADAR: Robust Two-stage Modality-incomplete Industrial Anomaly Detection

Oct 02, 2024

Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration

Sep 30, 2024

Align$^2$LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation

Sep 27, 2024