Malihe Alikhani

Measuring How (Not Just Whether) VLMs Build Common Ground
Sep 04, 2025

SiLVERScore: Semantically-Aware Embeddings for Sign Language Generation Evaluation
Sep 04, 2025

Quantifying Sycophancy as Deviations from Bayesian Rationality in LLMs
Aug 23, 2025

OPeRA: A Dataset of Observation, Persona, Rationale, and Action for Evaluating LLMs on Human Online Shopping Behavior Simulation
Jun 05, 2025

Human-centered explanation does not fit all: The interplay of sociotechnical, cognitive, and individual factors in the effect of AI explanations in algorithmic decision-making
Feb 17, 2025

Better Slow than Sorry: Introducing Positive Friction for Reliable Dialogue Systems
Jan 31, 2025

Contextual ASR Error Handling with LLMs Augmentation for Goal-Oriented Conversational AI
Jan 10, 2025

Fairness at Every Intersection: Uncovering and Mitigating Intersectional Biases in Multimodal Clinical Predictions
Nov 30, 2024

Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents
Oct 18, 2024

Eliciting Uncertainty in Chain-of-Thought to Mitigate Bias against Forecasting Harmful User Behaviors
Oct 17, 2024