Alessandro Capotondi

Bidirectional Awareness Induction in Autoregressive Seq2Seq Models
Aug 25, 2024

Shifted Window Fourier Transform And Retention For Image Captioning
Aug 25, 2024

Heterogeneous Encoders Scaling In The Transformer For Neural Machine Translation
Dec 26, 2023

A request for clarity over the End of Sequence token in the Self-Critical Sequence Training
May 20, 2023

ExpansionNet v2: Block Static Expansion in fast end to end training for Image Captioning
Aug 19, 2022

A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays
Oct 20, 2021

Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers
Aug 12, 2020

Robust navigation with tinyML for autonomous mini-vehicles
Jul 01, 2020

Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers
May 30, 2019

NEURAghe: Exploiting CPU-FPGA Synergies for Efficient and Flexible CNN Inference Acceleration on Zynq SoCs
Dec 04, 2017