Urmish Thakker

SubgoalXL: Subgoal-based Expert Learning for Theorem Proving

Aug 20, 2024

SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts

May 13, 2024

SambaLingo: Teaching Large Language Models New Languages

Apr 08, 2024

Efficiently Adapting Pretrained Language Models To New Languages

Nov 09, 2023

Training Large Language Models Efficiently with Sparsity and Dataflow

Apr 11, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts

Feb 02, 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization

Oct 15, 2021

MLPerf Tiny Benchmark

Jun 28, 2021

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

Feb 14, 2021