
Xiangjue Dong

DisastQA: A Comprehensive Benchmark for Evaluating Question Answering in Disaster Management

Jan 07, 2026

CHOIR: Collaborative Harmonization fOr Inference Robustness

Oct 26, 2025

DisastIR: A Comprehensive Information Retrieval Benchmark for Disaster Management

May 20, 2025

Masculine Defaults via Gendered Discourse in Podcasts and Large Language Models

Apr 15, 2025

A Survey on LLM Inference-Time Self-Improvement

Dec 18, 2024

ReasoningRec: Bridging Personalized Recommendations and Human-Interpretable Explanations through LLM Reasoning

Oct 30, 2024

Disclosure and Mitigation of Gender Bias in LLMs

Feb 17, 2024

The Neglected Tails of Vision-Language Models

Feb 02, 2024

DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Pre-trained Language Models

Nov 14, 2023

Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation

Nov 01, 2023