Paul Smolensky

Mechanisms of Symbol Processing for In-Context Learning in Transformer Networks

Oct 23, 2024

Implicit Chain of Thought Reasoning via Knowledge Distillation

Nov 02, 2023

Differentiable Tree Operations Promote Compositional Generalization

Jun 01, 2023

Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models

Dec 21, 2022

Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages

Aug 11, 2022

Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems

May 02, 2022

How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN

Nov 18, 2021

Distributed neural encoding of binding to thematic roles

Oct 24, 2021

Scalable knowledge base completion with superposition memories

Oct 24, 2021

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization

Jun 02, 2021