Katerina Margatina

The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
Apr 24, 2024

Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance?
Oct 26, 2023

Active Learning Principles for In-Context Learning with Large Language Models
May 23, 2023

On the Limitations of Simulating Active Learning
May 21, 2023

Dynamic Benchmarking of Masked Language Models on Temporal Concept Drift with Multiple Views
Feb 23, 2023

Investigating Multi-source Active Learning for Natural Language Inference
Feb 14, 2023

Challenges and Strategies in Cross-Cultural NLP
Mar 18, 2022

Active Learning by Acquiring Contrastive Examples
Sep 08, 2021

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling
Sep 04, 2021

Bayesian Active Learning with Pretrained Language Models
Apr 16, 2021