Davis Yoshida

Making the Most of your Model: Methods for Finetuning and Applying Pretrained Transformers

Aug 29, 2024

Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using LLM Optimizers

Jun 07, 2024

MAP's not dead yet: Uncovering true language model modes by conditioning away degeneracy

Nov 15, 2023

NF4 Isn't Information Theoretically Optimal (and that's Good)

Jun 14, 2023

Reconsidering the Past: Optimizing Hidden States in Language Models

Dec 16, 2021

Adding Recurrence to Pretrained Transformers for Improved Efficiency and Context Size

Aug 16, 2020