Amelia Glaese

Measuring short-form factuality in large language models

Nov 07, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Fine-tuning language models to find agreement among humans with diverse preferences

Nov 28, 2022

Improving alignment of dialogue agents via targeted human judgements

Sep 28, 2022

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models

Jun 16, 2022

Learning how to Interact with a Complex Interface using Hierarchical Reinforcement Learning

Apr 21, 2022

Red Teaming Language Models with Language Models

Feb 07, 2022

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

Dec 08, 2021

Challenges in Detoxifying Language Models

Sep 15, 2021

AndroidEnv: A Reinforcement Learning Platform for Android

May 27, 2021