Andrew Dai

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
Mar 08, 2024

Quality-Diversity through AI Feedback
Oct 31, 2023

Brainformers: Trading Simplicity for Efficiency
May 29, 2023

MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
May 24, 2023

MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks
Mar 30, 2023

M-VADER: A Model for Diffusion with Multimodal Context
Dec 07, 2022

Scaling Instruction-Finetuned Language Models
Oct 20, 2022

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Jun 10, 2022

Mixture-of-Experts with Expert Choice Routing
Feb 18, 2022

Deep Physiological State Space Model for Clinical Forecasting
Dec 04, 2019