Jiebo Song

Fast and Scalable Semi-Supervised Learning for Multi-View Subspace Clustering

Aug 11, 2024

PCNN: Pattern-based Fine-Grained Regular Pruning towards Optimizing CNN Accelerators

Feb 11, 2020

Light-weight Calibrator: a Separable Component for Unsupervised Domain Adaptation

Nov 28, 2019

SCAN: A Scalable Neural Networks Framework Towards Compact and Efficient Models

May 27, 2019

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation

May 17, 2019