Noam Shazeer

PaLM: Scaling Language Modeling with Pathways

Apr 19, 2022

Scaling Up Models and Data with t5x and seqio

Mar 31, 2022

Designing Effective Sparse Expert Models

Feb 17, 2022

LaMDA: Language Models for Dialog Applications

Feb 10, 2022

Primer: Searching for Efficient Transformers for Language Modeling

Sep 17, 2021

GSPMD: General and Scalable Parallelization for ML Computation Graphs

May 10, 2021

Do Transformer Modifications Transfer Across Implementations and Applications?

Feb 23, 2021

Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

Jan 11, 2021

GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding

Jun 30, 2020

Talking-Heads Attention

Mar 05, 2020