Manmohan Chandraker

Materialist: Physically Based Editing Using Single-Image Inverse Rendering

Jan 07, 2025

Drive-1-to-3: Enriching Diffusion Priors for Novel View Synthesis of Real Vehicles

Dec 19, 2024

Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data

May 22, 2024

A Minimalist Prompt for Zero-Shot Policy Learning

May 09, 2024

Instantaneous Perception of Moving Objects in 3D

May 05, 2024

LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes

May 04, 2024

DiL-NeRF: Delving into Lidar for Neural Radiance Field on Street Scenes

May 01, 2024

Efficient Transformer Encoders for Mask2Former-style models

Apr 23, 2024

Progressive Token Length Scaling in Transformer Encoders for Efficient Universal Segmentation

Apr 23, 2024

Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement

Apr 06, 2024