Noah Constant

Training LLMs over Neurally Compressed Text
Apr 04, 2024

Transfer Learning for Text Diffusion Models
Jan 30, 2024

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Dec 22, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"
Nov 15, 2023

FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Oct 05, 2023

UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining
Apr 18, 2023

Character-Aware Models Improve Visual Text Rendering
Dec 20, 2022

FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation
Oct 01, 2022

Bidirectional Language Models Are Also Few-shot Learners
Sep 29, 2022

Reducing Retraining by Recycling Parameter-Efficient Prompts
Aug 10, 2022