
Jiacheng Ruan

EIAD: Explainable Industrial Anomaly Detection Via Multi-Modal Large Language Models

Mar 18, 2025

ReviewAgents: Bridging the Gap Between Human and AI-Generated Paper Reviews

Mar 11, 2025

VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models

Mar 10, 2025

From Motion Signals to Insights: A Unified Framework for Student Behavior Analysis and Feedback in Physical Education Classes

Mar 09, 2025

FTII-Bench: A Comprehensive Multimodal Benchmark for Flow Text with Image Insertion

Oct 16, 2024

Understanding Robustness of Parameter-Efficient Tuning for Image Classification

Oct 13, 2024

MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object Scenarios

Sep 24, 2024

LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training

Jun 24, 2024

Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts

Jun 17, 2024

iDAT: inverse Distillation Adapter-Tuning

Mar 23, 2024