Ruixiang Tang

EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens

Mar 10, 2025

DBR: Divergence-Based Regularization for Debiasing Natural Language Understanding Models

Feb 25, 2025

Can Large Vision-Language Models Detect Images Copyright Infringement from GenAI?

Feb 23, 2025

Massive Values in Self-Attention Modules are the Key to Contextual Knowledge Understanding

Feb 03, 2025

Survey and Improvement Strategies for Gene Prioritization with Large Language Models

Jan 30, 2025

Decoding Knowledge in Large Language Models: A Framework for Categorization and Comprehension

Jan 02, 2025

Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics

Nov 22, 2024

Disentangling Memory and Reasoning Ability in Large Language Models

Nov 21, 2024

When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations

Nov 19, 2024

Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion

Oct 06, 2024