
Zhen Xiang

Q-realign: Piggybacking Realignment on Quantization for Safe and Efficient LLM Deployment

Jan 13, 2026

RadFabric: Agentic AI System with Reasoning Capability for Radiology

Jun 17, 2025

CDR-Agent: Intelligent Selection and Execution of Clinical Decision Rules Using Large Language Model Agents

May 29, 2025

SOSBENCH: Benchmarking Safety Alignment on Scientific Knowledge

May 27, 2025

How Memory Management Impacts LLM Agents: An Empirical Study of Experience-Following Behavior

May 21, 2025

Doxing via the Lens: Revealing Privacy Leakage in Image Geolocation for Agentic Multi-Modal Large Reasoning Model

Apr 29, 2025

Large Language Model Empowered Privacy-Protected Framework for PHI Annotation in Clinical Notes

Apr 22, 2025

MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models

Mar 19, 2025

A Practical Memory Injection Attack against LLM Agents

Mar 05, 2025

Multi-Faceted Studies on Data Poisoning can Advance LLM Development

Feb 20, 2025