Xizi Chen

A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications
Jul 19, 2023

Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation
Apr 03, 2021

A Reconfigurable Winograd CNN Accelerator with Nesting Decomposition Algorithm for Computing Convolution with Large Filters
Feb 26, 2021

SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity
Nov 03, 2017