
Zhifeng Lin

Learning Granularity-Unified Representations for Text-to-Image Person Re-identification
Jul 16, 2022

Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments
Dec 07, 2019

Train Where the Data is: A Case for Bandwidth Efficient Coded Training
Oct 22, 2019

Collage Inference: Achieving low tail latency during distributed image classification using coded redundancy models
Jun 05, 2019

Collage Inference: Tolerating Stragglers in Distributed Neural Network Inference using Coding
Apr 27, 2019

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training
Nov 08, 2018