
Fan Yang

Robotics Department, University of Michigan

PAFFA: Premeditated Actions For Fast Agents (Dec 10, 2024)

Neuro-Symbolic Data Generation for Math Reasoning (Dec 06, 2024)

SDR-GNN: Spectral Domain Reconstruction Graph Neural Network for Incomplete Multimodal Learning in Conversational Emotion Recognition (Nov 29, 2024)

RoadGen: Generating Road Scenarios for Autonomous Vehicle Testing (Nov 29, 2024)

HEIE: MLLM-Based Hierarchical Explainable AIGC Image Implausibility Evaluator (Nov 26, 2024)

Uncertainty-Aware Regression for Socio-Economic Estimation via Multi-View Remote Sensing (Nov 21, 2024)

Kwai-STaR: Transform LLMs into State-Transition Reasoners (Nov 07, 2024)

Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation (Nov 05, 2024)

Autoformalize Mathematical Statements by Symbolic Equivalence and Semantic Consistency (Oct 28, 2024)

Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models (Oct 21, 2024)