
Haijin Liang

Best Practices for Distilling Large Language Models into BERT for Web Search Ranking

Nov 07, 2024

Type-enriched Hierarchical Contrastive Strategy for Fine-Grained Entity Typing

Aug 22, 2022

ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding

Aug 05, 2022

Bridging the Gap Between Training and Inference of Bayesian Controllable Language Models

Jun 11, 2022