Min-Yen Kan

National University of Singapore

Reasoning Robustness of LLMs to Adversarial Typographical Errors

Nov 08, 2024

V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization

Nov 05, 2024

Multi-expert Prompting Improves Reliability, Safety, and Usefulness of Large Language Models

Nov 01, 2024

DataTales: A Benchmark for Real-World Intelligent Data Narration

Oct 23, 2024

CCSBench: Evaluating Compositional Controllability in LLMs for Scientific Document Summarization

Oct 16, 2024

COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement

Oct 12, 2024

MVP-Bench: Can Large Vision-Language Models Conduct Multi-level Visual Perception Like Humans?

Oct 06, 2024

TART: An Open-Source Tool-Augmented Framework for Explainable Table-based Reasoning

Sep 18, 2024

LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs

Aug 16, 2024

The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models

Jun 14, 2024