Phil Blunsom

BAM! Just Like That: Simple and Efficient Parameter Upcycling for Mixture of Experts
Aug 15, 2024

Separations in the Representational Capabilities of Transformers and Recurrent Architectures
Jun 13, 2024

Improving Reward Models with Synthetic Critiques
May 31, 2024

Aya 23: Open Weight Releases to Further Multilingual Progress
May 23, 2024

Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
Feb 12, 2024

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
Oct 04, 2023

Human Feedback is not Gold Standard
Sep 28, 2023

Structural Transfer Learning in NL-to-Bash Semantic Parsers
Jul 31, 2023

On "Scientific Debt" in NLP: A Case for More Rigour in Language Model Pre-Training Research

Add code
Jun 05, 2023
Viaarxiv icon

Intriguing Properties of Quantization at Scale
May 30, 2023