Jannik Strötgen

Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization
Oct 03, 2024

Learn it or Leave it: Module Composition and Pruning for Continual Learning
Jun 26, 2024

Discourse-Aware In-Context Learning for Temporal Expression Normalization
Apr 11, 2024

Rehearsal-Free Modular and Compositional Continual Learning for Language Models
Mar 31, 2024

GradSim: Gradient-Based Language Grouping for Effective Multilingual Training
Oct 23, 2023

TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
May 22, 2023

NLNDE at SemEval-2023 Task 12: Adaptive Pretraining and Source Language Selection for Low-Resource Multilingual Sentiment Analysis
Apr 28, 2023

Multilingual Normalization of Temporal Expressions with Masked Language Models
May 20, 2022

CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain
Dec 17, 2021

Boosting Transformers for Job Expression Extraction and Classification in a Low-Resource Setting
Sep 17, 2021