
Inho Kang

SLM as Guardian: Pioneering AI Safety with Small Language Models

May 30, 2024

A Versatile Framework for Evaluating Ranked Lists in terms of Group Fairness and Relevance

Apr 01, 2022

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers

Sep 10, 2021

Self-supervised pre-training and contrastive representation learning for multiple-choice video QA

Sep 17, 2020

Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information

Nov 02, 2018