
Haichuan Yang

University of Rochester

SWaT: Statistical Modeling of Video Watch Time through User Behavior Analysis

Aug 14, 2024

TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models

Sep 05, 2023

Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts

Jun 08, 2023

LiCo-Net: Linearized Convolution Network for Hardware-efficient Keyword Spotting

Nov 09, 2022

Learning a Dual-Mode Speech Recognition Model via Self-Pruning

Jul 25, 2022

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks

Jun 02, 2022

PyTorchVideo: A Deep Learning Library for Video Understanding

Nov 18, 2021

Low-Rank+Sparse Tensor Compression for Neural Networks

Nov 02, 2021

Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet

Oct 15, 2021

Noisy Training Improves E2E ASR for the Edge

Jul 09, 2021