Hongxing Gao

Condensation-Net: Memory-Efficient Network Architecture with Cross-Channel Pooling Layers and Virtual Feature Maps

Apr 29, 2021

IFQ-Net: Integrated Fixed-point Quantization Networks for Embedded Vision

Nov 19, 2019

DupNet: Towards Very Tiny Quantized CNN with Improved Accuracy for Face Detection

Nov 13, 2019

Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation

Nov 13, 2019