3D Semantic Segmentation


3D semantic segmentation is a computer vision task that assigns a semantic class label to every point of a 3D point cloud (or every face of a 3D mesh), partitioning a scene into meaningful regions such as objects and their parts. The resulting per-point labels support applications such as robotics, autonomous driving, and augmented reality.
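As a toy illustration of the task's input/output contract (not tied to any paper listed below, and far simpler than the learned models they describe), the following sketch labels each point of a synthetic point cloud by its nearest class centroid; the class names and centroid positions are invented for the example:

```python
import numpy as np

def segment_points(points, centroids):
    """Label each (x, y, z) point with the index of its nearest centroid.

    Real 3D semantic segmentation models (e.g. point-based networks) learn
    their decision boundaries; this nearest-centroid rule only demonstrates
    the task shape: an (N, 3) cloud in, N integer labels out.
    """
    # Pairwise squared distances between points and centroids: shape (N, K).
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
# Two hypothetical "objects": a cluster near the origin (class 0, say
# "ground") and one offset along z (class 1, say "chair").
ground = rng.normal([0.0, 0.0, 0.0], 0.2, size=(100, 3))
chair = rng.normal([0.0, 0.0, 2.0], 0.2, size=(100, 3))
cloud = np.vstack([ground, chair])

centroids = np.array([[0.0, 0.0, 0.0],
                      [0.0, 0.0, 2.0]])
labels = segment_points(cloud, centroids)  # one class label per point
```

The vectorized distance computation is the same pattern used when assigning points to prototypes or superpoints in larger pipelines; only the source of the centroids (here hard-coded, in practice learned or clustered) changes.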

Recent papers:

- Exploring Modality Guidance to Enhance VFM-based Feature Fusion for UDA in 3D Semantic Segmentation (Apr 19, 2025)
- Occlusion-Ordered Semantic Instance Segmentation (Apr 18, 2025)
- 3D-PointZshotS: Geometry-Aware 3D Point Cloud Zero-Shot Semantic Segmentation Narrowing the Visual-Semantic Gap (Apr 16, 2025)
- Digital Twin Generation from Visual Data: A Survey (Apr 17, 2025)
- Training-Free Hierarchical Scene Understanding for Gaussian Splatting with Superpoint Graphs (Apr 17, 2025)
- TextDiffSeg: Text-guided Latent Diffusion Model for 3d Medical Images Segmentation (Apr 16, 2025)
- DSM: Building A Diverse Semantic Map for 3D Visual Grounding (Apr 11, 2025)
- TextSplat: Text-Guided Semantic Fusion for Generalizable Gaussian Splatting (Apr 13, 2025)
- RayFronts: Open-Set Semantic Ray Frontiers for Online Scene Understanding and Exploration (Apr 09, 2025)
- Turin3D: Evaluating Adaptation Strategies under Label Scarcity in Urban LiDAR Segmentation with Semi-Supervised Techniques (Apr 08, 2025)