Qian Lou

FHAIM: Fully Homomorphic AIM For Private Synthetic Data Generation

Feb 05, 2026

R2-Router: A New Paradigm for LLM Routing with Reasoning

Feb 02, 2026

RPP: A Certified Poisoned-Sample Detection Framework for Backdoor Attacks under Dataset Imbalance

Jan 30, 2026

Learning Latency-Aware Orchestration for Parallel Multi-Agent Systems

Jan 15, 2026

Factuality Beyond Coherence: Evaluating LLM Watermarking Methods for Medical Texts

Sep 09, 2025

TFHE-Coder: Evaluating LLM-agentic Fully Homomorphic Encryption Code Generation

Mar 15, 2025

CipherPrune: Efficient and Scalable Private Transformer Inference

Feb 24, 2025

Uncovering the Hidden Threat of Text Watermarking from Users with Cross-Lingual Knowledge

Feb 23, 2025

Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare

Jan 27, 2025

freePruner: A Training-free Approach for Large Multimodal Model Acceleration

Nov 23, 2024