Jules Sanchez

COLA: COarse-LAbel multi-source LiDAR semantic segmentation for autonomous driving

Nov 06, 2023

ParisLuco3D: A high-quality target dataset for domain generalization of LiDAR perception

Oct 25, 2023

Domain generalization of 3D semantic segmentation in autonomous driving

Dec 07, 2022

COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets

Feb 14, 2022

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case

Apr 24, 2021