Ruiyang Qin

Combating Partial Perception Deficit in Autonomous Driving with Multimodal LLM Commonsense

Mar 10, 2025

Recognize Any Surgical Object: Unleashing the Power of Weakly-Supervised Data

Jan 25, 2025

Tiny-Align: Bridging Automatic Speech Recognition and Large Language Model on the Edge

Nov 21, 2024

NVCiM-PT: An NVCiM-assisted Prompt Tuning Framework for Edge LLMs

Nov 12, 2024

An Adaptive System for Wearable Devices to Detect Stress Using Physiological Signals

Jul 21, 2024

Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices

Jun 06, 2024

Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures

May 07, 2024

FL-NAS: Towards Fairness of NAS for Resource Constrained Devices via Large Language Models

Feb 09, 2024

Enabling On-Device Large Language Model Personalization with Self-Supervised Data Selection and Synthesis

Dec 02, 2023

When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs

Sep 25, 2023