Antonio Norelli

Artificial Scientific Discovery

Nov 18, 2024

DyGMamba: Efficiently Modeling Long-Term Temporal Dependency on Continuous-Time Dynamic Graphs with State Space Models

Aug 08, 2024

Latent Space Translation via Semantic Alignment

Nov 01, 2023

Bootstrapping Parallel Anchors for Relative Representations

Mar 01, 2023

ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training

Oct 04, 2022

Relative representations enable zero-shot latent space communication

Sep 30, 2022

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Jun 10, 2022

Explanatory Learning: Beyond Empiricism in Neural Networks

Jan 25, 2022

OLIVAW: Mastering Othello with neither Humans nor a Penny

Mar 31, 2021

LIMP: Learning Latent Shape Representations with Metric Preservation Priors

Mar 27, 2020