Jianwen Jiang

Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency

Sep 04, 2024

CyberHost: Taming Audio-driven Avatar Diffusion Model with Region Codebook Attention

Sep 03, 2024

MobilePortrait: Real-Time One-Shot Neural Head Avatars on Mobile Devices

Jul 08, 2024

Superior and Pragmatic Talking Face Generation with Teacher-Student Framework

Mar 26, 2024

RLIPv2: Fast Scaling of Relational Language-Image Pre-training

Aug 18, 2023

ViM: Vision Middleware for Unified Downstream Transferring

Mar 13, 2023

VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval

Nov 23, 2022

Grow and Merge: A Unified Framework for Continuous Categories Discovery

Oct 09, 2022

RLIP: Relational Language-Image Pre-training for Human-Object Interaction Detection

Sep 05, 2022

Rethinking supervised pre-training for better downstream transferring

Oct 12, 2021