Xinyi Yang

AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design
Aug 14, 2025

Rethinking Prompt-based Debiasing in Large Language Models
Mar 12, 2025

Multi-Robot System for Cooperative Exploration in Unknown Environments: A Survey
Mar 10, 2025

Policy-to-Language: Train LLMs to Explain Decisions with Flow-Matching Generated Rewards
Feb 18, 2025

Learning to Plan with Personalized Preferences
Feb 02, 2025

DENIAHL: In-Context Features Influence LLM Needle-In-A-Haystack Abilities
Nov 28, 2024

3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing
Oct 31, 2024

DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios
Oct 31, 2024

VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks
Oct 07, 2024

ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement
Oct 03, 2024