
Liu Ke

Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference

Mar 10, 2023

Data Leakage via Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems

Dec 12, 2022

Neural Network-Inspired Analog-to-Digital Conversion to Achieve Super-Resolution with Low-Precision RRAM Devices

Nov 28, 2019

AxTrain: Hardware-Oriented Neural Network Training for Approximate Inference

May 21, 2018