
Yee Whye Teh

University College London

Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation

Dec 18, 2024

Learning Loss Landscapes in Preference Optimization

Nov 10, 2024

Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset

Nov 06, 2024

L3Ms -- Lagrange Large Language Models

Oct 28, 2024

SymDiff: Equivariant Diffusion via Stochastic Symmetrisation

Oct 08, 2024

Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design

Jul 16, 2024

EvIL: Evolution Strategies for Generalisable Imitation Learning

Jun 15, 2024

RecurrentGemma: Moving Past Transformers for Efficient Open Language Models

Apr 11, 2024

Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts

Mar 13, 2024

Online Adaptation of Language Models with a Memory of Amortized Contexts

Mar 07, 2024