Xiaoyao Liang

MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization

Nov 16, 2023

$\rm A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks

Feb 01, 2023

BayesFT: Bayesian Optimization for Fault Tolerant Neural Network Architecture

Sep 30, 2022

DNN Training Acceleration via Exploring GPGPU Friendly Sparsity

Mar 11, 2022

CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction

Mar 09, 2022

N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores

Dec 15, 2021

SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network

Mar 02, 2021

Invocation-driven Neural Approximate Computing with a Multiclass-Classifier and Multiple Approximators

Oct 19, 2018

AXNet: ApproXimate computing using an end-to-end trainable neural network

Jul 27, 2018

Approximate Random Dropout

May 23, 2018