Chi-Ying Tsui

FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization
Jun 26, 2024
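
The title points to quantizing model updates on both the uplink (client to server) and the downlink (server to client). As a purely illustrative sketch of the standard building block such schemes adapt, and not FedAQ's actual algorithm, the unbiased stochastic uniform quantizer below compresses an update to a chosen bit-width; the 4-bit setting and the function names are assumptions.

```python
import numpy as np

def quantize(v, num_bits, rng=np.random.default_rng()):
    """Unbiased stochastic uniform quantization of a vector v.

    Maps each entry onto one of 2**num_bits levels spanning
    [-max|v|, +max|v|], rounding up or down at random so that
    E[dequantize(quantize(v))] == v.
    """
    scale = np.max(np.abs(v))
    if scale == 0.0:
        return np.zeros_like(v, dtype=np.int64), 0.0
    levels = 2 ** num_bits - 1
    normalized = (v / scale + 1.0) / 2.0 * levels  # in [0, levels]
    lower = np.floor(normalized)
    # Round up with probability equal to the fractional part (unbiased).
    q = lower + (rng.random(v.shape) < (normalized - lower))
    return q.astype(np.int64), scale

def dequantize(q, scale, num_bits):
    levels = 2 ** num_bits - 1
    return (q / levels * 2.0 - 1.0) * scale

# A client would quantize its model update before the uplink:
update = np.random.randn(1000)
q, s = quantize(update, num_bits=4)
recovered = dequantize(q, s, num_bits=4)
print("mean abs error:", np.abs(recovered - update).mean())
```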

How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels
Oct 25, 2023

Step-GRAND: A Low Latency Universal Soft-input Decoder
Jul 27, 2023
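
GRAND (guessing random additive noise decoding) is code-agnostic: it tests noise patterns in decreasing likelihood order and stops at the first one whose removal yields a valid codeword. The sketch below is plain hard-input GRAND over a binary linear code with parity-check matrix H, for illustration only; Step-GRAND's soft-input pattern ordering and latency optimizations are not reproduced here.

```python
import numpy as np
from itertools import combinations

def grand_decode(y, H, max_weight=3):
    """Hard-input GRAND: try error patterns e in increasing Hamming
    weight (the maximum-likelihood order for a BSC with p < 0.5)
    until H @ (y ^ e) == 0 mod 2, i.e. y ^ e is a codeword."""
    n = H.shape[1]
    for w in range(max_weight + 1):
        for positions in combinations(range(n), w):
            e = np.zeros(n, dtype=np.uint8)
            e[list(positions)] = 1
            candidate = y ^ e
            if not (H @ candidate % 2).any():
                return candidate
    return None  # abandon: no codeword within max_weight bit flips

# Toy example: (7,4) Hamming code, columns of H are 1..7 in binary.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
received = np.zeros(7, dtype=np.uint8)  # all-zero codeword ...
received[2] ^= 1                        # ... with one bit flipped
print(grand_decode(received, H))        # recovers the all-zero codeword
```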

A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications
Jul 19, 2023

FedDQ: Communication-Efficient Federated Learning with Descending Quantization
Oct 13, 2021
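
The title suggests a quantization resolution that descends as training converges: model updates shrink in magnitude over rounds, so fewer levels can hold the same absolute error. The schedule below is a hypothetical illustration of that intuition (the rule, thresholds, and names are assumptions, not the paper's method); a client would feed the chosen bit-width into a quantizer like the one sketched above.

```python
import numpy as np

def bits_for_round(update, target_step=0.05, min_bits=2, max_bits=8):
    """Hypothetical descending-bit-width rule: choose the fewest bits
    whose quantization step over [-max|v|, +max|v|] stays below a fixed
    absolute target. As updates shrink across rounds, the chosen
    bit-width descends automatically."""
    scale = np.max(np.abs(update))
    for bits in range(min_bits, max_bits + 1):
        if 2.0 * scale / (2 ** bits - 1) <= target_step:
            return bits
    return max_bits

# Early rounds (large updates) need more bits than late rounds.
print(bits_for_round(np.random.randn(1000) * 1.0))   # e.g. 8
print(bits_for_round(np.random.randn(1000) * 0.05))  # e.g. 3
```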

Microshift: An Efficient Image Compression Algorithm for Hardware
Apr 20, 2021

Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation
Apr 03, 2021
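
Fine-grained pruning removes individual weights rather than whole channels or rows. The sketch below is generic magnitude pruning, a minimal stand-in for the idea; the paper's weight-permutation scheme for packing the surviving weights into hardware is not shown, and the threshold rule is an assumption.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries so that roughly a
    `sparsity` fraction of the tensor is zero. Fine-grained: weights
    are removed individually, with no channel structure imposed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(64, 64)
pw = magnitude_prune(w, sparsity=0.9)
print("achieved sparsity:", (pw == 0).mean())
```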

A Reconfigurable Winograd CNN Accelerator with Nesting Decomposition Algorithm for Computing Convolution with Large Filters
Feb 26, 2021
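
Winograd convolution trades multiplications for additions via fixed transform matrices, and large filters are handled in this line of work by nesting or decomposing small Winograd tiles. The example below is only the standard 1D F(2,3) building block (2 outputs, 3-tap filter, 4 multiplies instead of 6) with the well-known transforms; the paper's nesting decomposition itself is not reproduced.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d, g):
    """Two outputs of valid 1D correlation of a 4-sample tile d
    with a 3-tap filter g, using 4 elementwise multiplies."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)  # the two results match
```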

Polyimide-Based Flexible Coupled-Coils Design and Load-Shift Keying Analysis
Feb 02, 2021

SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity
Nov 03, 2017
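
The title names the technique: skip work for activations that are zero on the input side and predicted to be zero on the output side. A software analogue of the input-sparsity half for a fully connected layer is sketched below under those assumptions; the accelerator's output-sparsity prediction is hardware-specific and omitted.

```python
import numpy as np

def sparse_input_matvec(W, x):
    """Fully connected layer y = W @ x that skips every
    multiply-accumulate for zero activations: only columns of W
    whose corresponding input is nonzero contribute."""
    nonzero = np.flatnonzero(x)
    return W[:, nonzero] @ x[nonzero]

W = np.random.randn(8, 16)
x = np.maximum(np.random.randn(16), 0.0)  # ReLU output: roughly half zeros
assert np.allclose(sparse_input_matvec(W, x), W @ x)
```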