Fréderic Godin

Zero-Shot Cross-Lingual Sentiment Classification under Distribution Shift: an Exploratory Study

Nov 11, 2023

IDAS: Intent Discovery with Abstractive Summarization

May 31, 2023

A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders

Apr 08, 2021

Learning When Not to Answer: A Ternary Reward Structure for Reinforcement Learning based Question Answering

Apr 03, 2019

Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?

Aug 28, 2018

Predefined Sparseness in Recurrent Sequence Models

Aug 27, 2018

Dual Rectified Linear Units: A Replacement for Tanh Activation Functions in Quasi-Recurrent Neural Networks

Oct 31, 2017

Improving Language Modeling using Densely Connected Recurrent Neural Networks

Jul 19, 2017