Xuehai Qian

Fine-Grained Embedding Dimension Optimization During Training for Recommender Systems

Jan 09, 2024

RobustState: Boosting Fidelity of Quantum State Preparation via Noise-Aware Variational Training

Nov 27, 2023

GNNPipe: Accelerating Distributed Full-Graph GNN Training with Pipelined Model Parallelism

Aug 19, 2023

QuEst: Graph Transformer for Quantum Circuit Reliability Estimation

Oct 30, 2022

PAN: Pulse Ansatz on NISQ Machines

Aug 02, 2022

GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices based on Fine-Grained Structured Weight Sparsity

Aug 25, 2021

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator

Jun 16, 2021

HASCO: Towards Agile HArdware and Software CO-design for Tensor Computation

May 04, 2021

Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework

Dec 12, 2020

PERMDNN: Efficient Compressed DNN Architecture with Permuted Diagonal Matrices

Apr 23, 2020