
Yang Zhou

Yahoo! Labs

MotionBridge: Dynamic Video Inbetweening with Flexible Controls

Dec 17, 2024

Move-in-2D: 2D-Conditioned Human Motion Generation

Dec 17, 2024

ShotVL: Human-Centric Highlight Frame Retrieval via Language Queries

Dec 17, 2024

VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping

Dec 15, 2024

IF-MDM: Implicit Face Motion Diffusion Model for High-Fidelity Realtime Talking Head Generation

Dec 05, 2024

BlendServe: Optimizing Offline Inference for Auto-regressive Large Models with Resource-aware Batching

Nov 25, 2024

CV-Cities: Advancing Cross-View Geo-Localization in Global Cities

Nov 19, 2024

Morpho-Aware Global Attention for Image Matting

Nov 15, 2024

NEO: Saving GPU Memory Crisis with CPU Offloading for Online LLM Inference

Nov 02, 2024

BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays

Oct 29, 2024