n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

Mar 22, 2021

View paper on arXiv
