Razvan Pascanu

Google DeepMind

Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset

Nov 06, 2024

A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks

Oct 29, 2024

Retrieval-Augmented Decision Transformer: External Memory for In-context RL

Oct 09, 2024

Round and Round We Go! What makes Rotary Positional Encodings useful?

Oct 08, 2024

softmax is not enough (for sharp out-of-distribution)

Oct 01, 2024

When can transformers compositionally generalize in-context?

Jul 17, 2024

Investigating Low-Rank Training in Transformer Language Models: Efficiency and Scaling Analysis

Jul 13, 2024

Normalization and effective learning rates in reinforcement learning

Jul 01, 2024

Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers

Jun 24, 2024

Transformers meet Neural Algorithmic Reasoners

Jun 13, 2024