Jihan Yang

Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs

Jun 24, 2024

Can 3D Vision-Language Models Truly Understand Natural Language?

Mar 28, 2024

V-IRL: Grounding Virtual Intelligence in Real Life

Feb 05, 2024

Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding

Aug 01, 2023

RegionPLC: Regional Point-Language Contrastive Learning for Open-World 3D Scene Understanding

Apr 03, 2023

Language-driven Open-Vocabulary 3D Scene Understanding

Nov 29, 2022

Towards Efficient 3D Object Detection with Knowledge Distillation

May 30, 2022

DODA: Data-oriented Sim-to-Real Domain Adaptation for 3D Indoor Semantic Segmentation

Apr 04, 2022

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability

Mar 26, 2022

ST3D++: Denoised Self-training for Unsupervised Domain Adaptation on 3D Object Detection

Aug 15, 2021