
Chunhui Liu

LaT: Latent Translation with Cycle-Consistency for Video-Text Retrieval

Jul 11, 2022

Unsupervised Action Segmentation with Self-supervised Feature Learning and Co-occurrence Parsing

Jun 02, 2021

VidTr: Video Transformer Without Convolutions

Apr 23, 2021

TubeR: Tube-Transformer for Action Detection

Apr 09, 2021

Selective Feature Compression for Efficient Activity Recognition Inference

Apr 01, 2021

NUTA: Non-uniform Temporal Aggregation for Action Recognition

Dec 15, 2020

A Comprehensive Study of Deep Video Action Recognition

Dec 11, 2020

Triplet Online Instance Matching Loss for Person Re-identification

Feb 24, 2020

Patch Correspondences for Interpreting Pixel-level CNNs

Sep 04, 2018

PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action Understanding

Mar 28, 2017