
Gurshaant Malik

Language Modeling using LMUs: 10x Better Data Efficiency or Improved Scaling Compared to Transformers

Oct 05, 2021

Hardware Aware Training for Efficient Keyword Spotting on General Purpose and Specialized Hardware

Sep 23, 2020

FPGA based hybrid architecture for parallelizing RRT

Jul 19, 2016