
Haoyi Zhou

Building Flexible Machine Learning Models for Scientific Computing at Scale

Feb 25, 2024

PhoGAD: Graph-based Anomaly Behavior Detection with Persistent Homology Optimization

Jan 19, 2024

Learning Music Sequence Representation from Text Supervision

May 31, 2023

Task-Specific Expert Pruning for Sparse Mixture-of-Experts

Jun 02, 2022

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption

Jun 02, 2022

Cross-Domain Object Detection with Mean-Teacher Transformer

May 03, 2022

RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models

Jun 07, 2021

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

Dec 17, 2020

Differentially-private Federated Neural Architecture Search

Jun 22, 2020

Stacked Kernel Network

Nov 25, 2017