
Artem Chernodub

Spivavtor: An Instruction Tuned Ukrainian Text Editing Model

Apr 29, 2024

Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models

Apr 23, 2024

Privacy- and Utility-Preserving NLP with Anonymized Data: A case study of Pseudonymization

Jun 08, 2023

Ensembling and Knowledge Distilling of Large Sequence Taggers for Grammatical Error Correction

Mar 24, 2022

GECToR -- Grammatical Error Correction: Tag, Not Rewrite

May 29, 2020

Sampling-based Gradient Regularization for Capturing Long-Term Dependencies in Recurrent Neural Networks

Feb 13, 2017

Norm-preserving Orthogonal Permutation Linear Unit Activation Functions (OPLU)

Jan 31, 2017

Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi-Step-Ahead Predictions

May 12, 2016

Neurocontrol methods review

Nov 17, 2015