
Kyle Olszewski

Contextual Gesture: Co-Speech Gesture Video Generation through Context-aware Gesture Representation

Feb 11, 2025

AutoDecoding Latent 3D Diffusion Models

Jul 07, 2023

Unsupervised Volumetric Animation

Jan 26, 2023

ScanEnts3D: Exploiting Phrase-to-3D-Object Correspondences for Improved Visio-Linguistic Models in 3D Scenes

Dec 12, 2022

Cross-Modal 3D Shape Generation and Manipulation

Jul 24, 2022

Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation

Jun 15, 2022

Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation

Apr 22, 2022

Quantized GAN for Complex Music Generation from Dance Videos

Apr 01, 2022

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis

Mar 31, 2022

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning

Mar 04, 2022