Stuart M. Shieber

Harvard University

string2string: A Modern Python Library for String-to-String Algorithms

Apr 27, 2023

The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications

Jul 08, 2022

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Jun 10, 2022

Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages

Nov 08, 2019

On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference

Jul 09, 2019

Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference

Jul 09, 2019

LSTM Networks Can Perform Dynamic Counting

Jun 09, 2019

On Evaluating the Generalization of LSTM Models in Formal Languages

Nov 02, 2018

Learning Neural Templates for Text Generation

Sep 13, 2018

Adapting Sequence Models for Sentence Correction

Jul 27, 2017