Yiqian Yang

Efficient Gravitational Wave Parameter Estimation via Knowledge Distillation: A ResNet1D-IAF Approach

Dec 11, 2024

Adaptive Epsilon Adversarial Training for Robust Gravitational Wave Parameter Estimation Using Normalizing Flows

Dec 10, 2024

NeuGPT: Unified multi-modal Neural GPT

Oct 28, 2024

E2H: A Two-Stage Non-Invasive Neural Signal Driven Humanoid Robotic Whole-Body Control Framework

Oct 03, 2024

MAD: Multi-Alignment MEG-to-Text Decoding

Jun 03, 2024

Are EEG-to-Text Models Working?

May 10, 2024

Decode Neural signal as Speech

Mar 04, 2024

Mapping EEG Signals to Visual Stimuli: A Deep Learning Approach to Match vs. Mismatch Classification

Sep 08, 2023

All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment

Jul 07, 2023

A Comprehensive Survey on Segment Anything Model for Vision and Beyond

May 19, 2023