Rami Al-Rfou

MoST: Multi-modality Scene Tokenization for Motion Prediction

Apr 30, 2024

Scaling Motion Forecasting Models with Ensemble Distillation

Apr 05, 2024

Let Your Graph Do the Talking: Encoding Structured Data for LLMs

Feb 08, 2024

MotionLM: Multi-Agent Motion Forecasting as Language Modeling

Sep 28, 2023

Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization

Mar 25, 2023

Wayformer: Motion Forecasting via Simple & Efficient Attention Networks

Jul 12, 2022

Narrowing the Coordinate-frame Gap in Behavior Prediction Models: Distillation for Efficient and Accurate Scene-centric Motion Forecasting

Jun 10, 2022

VN-Transformer: Rotation-Equivariant Attention for Vector Neurons

Jun 08, 2022

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

Oct 15, 2021

nmT5 -- Is parallel data still relevant for pre-training massively multilingual language models?

Jun 03, 2021