Qianli Wang

Locate, Steer, and Improve: A Practical Survey of Actionable Mechanistic Interpretability in Large Language Models

Jan 20, 2026

Order in the Evaluation Court: A Critical Analysis of NLG Evaluation Trends

Jan 12, 2026

eTracer: Towards Traceable Text Generation via Claim-Level Grounding

Jan 07, 2026

iFlip: Iterative Feedback-driven Counterfactual Example Refinement

Jan 04, 2026

Parallel Universes, Parallel Languages: A Comprehensive Study on LLM-based Multilingual Counterfactual Example Generation

Jan 01, 2026

Can Large Language Models Still Explain Themselves? Investigating the Impact of Quantization on Self-Explanations

Jan 01, 2026

Multilingual Datasets for Custom Input Extraction and Explanation Requests Parsing in Conversational XAI Systems

Aug 20, 2025

Truth or Twist? Optimal Model Selection for Reliable Label Flipping Evaluation in LLM-based Counterfactuals

May 20, 2025

Through a Compressed Lens: Investigating the Impact of Quantization on LLM Explainability and Interpretability

May 20, 2025

Cross-Frame OTFS Parameter Estimation Based On Chinese Remainder Theorem

Apr 07, 2025