Michael Pieler

Rephrasing natural text data with different languages and quality levels for Large Language Model pre-training

Oct 28, 2024

Are large language models superhuman chemists?

Apr 01, 2024

Stable LM 2 1.6B Technical Report

Feb 27, 2024

Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023

14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon

Jun 13, 2023

Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning

Oct 14, 2022

Few-shot Adaptation Works with UnpredicTable Data

Aug 08, 2022

GPT-NeoX-20B: An Open-Source Autoregressive Language Model

Apr 14, 2022