Llion Jones

The Ungrounded Alignment Problem

Aug 08, 2024

Transformer Layers as Painters

Jul 12, 2024

Helpful Neighbors: Leveraging Neighbors in Geographic Feature Pronunciation

Oct 18, 2022

DF-Conformer: Integrated architecture of Conv-TasNet and Conformer using linear complexity self-attention for speech enhancement

Jun 30, 2021

A Comparative Study on Neural Architectures and Training Methods for Japanese Speech Recognition

Jun 09, 2021

CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing

Apr 06, 2021

ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing

Jul 20, 2020

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

Feb 21, 2019

Character-Level Language Modeling with Deeper Self-Attention

Aug 09, 2018

The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation

Apr 27, 2018