Liming Zhan

GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark for Chest X-ray Diagnosis

Nov 25, 2024

Diversity-grounded Channel Prototypical Learning for Out-of-Distribution Intent Detection

Sep 17, 2024

Towards LLM-driven Dialogue State Tracking

Oct 23, 2023

How Good Are Large Language Models at Out-of-Distribution Detection?

Aug 23, 2023

Revisit Few-shot Intent Classification with PLMs: Direct Fine-tuning vs. Continual Pre-training

Jun 08, 2023

Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization

May 15, 2022