Abstract: In this work, we present a boundary and hole detection approach that traverses all the boundaries of an edge-manifold triangular mesh, irrespective of the presence of singular vertices, and subsequently determines and labels all holes of the mesh. The proposed automated hole-detection method is valuable to the computer-aided design (CAD) community, as all half-edges within the mesh are utilized and, for each half-edge, the algorithm guarantees both the existence and the uniqueness of the boundary associated with it. Because existing hole-detection approaches assume that singular vertices are absent or may require mesh modification, these methods are ill-equipped to detect boundaries and holes in real-world meshes that contain singular vertices. We demonstrate the method in an underwater autonomous robotic application, exploiting surface reconstruction methods based on point cloud data. In such a scenario, the detected holes can be interpreted as information gaps, enabling timely corrective action during data acquisition. However, the scope of our method is not confined to these two domains alone; it is versatile enough to be applied to any edge-manifold triangle mesh. An evaluation of the method is performed on both synthetic and real-world data (including a triangle mesh from a point cloud obtained by a multibeam sonar). The source code of our reference implementation is available at: https://github.com/Mauhing/hole-detection-on-triangle-mesh.
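As an illustration of the half-edge idea the abstract builds on, here is a minimal Python sketch of boundary-loop extraction on a triangle mesh. It is not the paper's algorithm: it assumes a consistently oriented mesh with no singular vertices (handling those is precisely the paper's contribution), and all names are illustrative.

```python
# Minimal sketch of boundary-loop extraction via half-edges (illustrative names).
# Assumes a consistently oriented, edge-manifold mesh WITHOUT singular vertices;
# robust handling of singular vertices is the paper's contribution and is not
# reproduced here.
def boundary_loops(triangles):
    """triangles: iterable of (i, j, k) vertex-index triples with consistent winding."""
    # Collect all directed half-edges of the mesh.
    half_edges = set()
    for i, j, k in triangles:
        half_edges.update([(i, j), (j, k), (k, i)])

    # A half-edge lies on a boundary if its opposite half-edge is absent.
    boundary = {(a, b) for (a, b) in half_edges if (b, a) not in half_edges}

    # Chain boundary half-edges head-to-tail into closed loops.
    next_vertex = dict(boundary)  # tail -> head (unique only without singular vertices)
    loops, visited = [], set()
    for start in next_vertex:
        if start in visited:
            continue
        loop, v = [], start
        while v not in visited:
            visited.add(v)
            loop.append(v)
            v = next_vertex[v]
        loops.append(loop)
    return loops

# Example: two triangles forming a quad have a single 4-vertex boundary loop.
print(boundary_loops([(0, 1, 2), (2, 1, 3)]))  # e.g. [[0, 1, 3, 2]]
```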
Abstract: While the use of neural radiance fields (NeRFs) in various challenging settings has been explored, only very recently have there been contributions that focus on the use of NeRF in foggy environments. We argue that traditional NeRF models are able to replicate scenes filled with fog and propose a method to remove the fog when synthesizing novel views. By calculating the global contrast of a scene, we can estimate a density threshold that, when applied, removes all visible fog. This makes it possible to use NeRF as a way of rendering clear views of objects of interest located in fog-filled environments. Additionally, to benchmark performance on such scenes, we introduce a new dataset that expands some of the original synthetic NeRF scenes through the addition of fog and natural environments. The code, dataset, and video results can be found on our project page: https://vegardskui.com/fognerf/
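To illustrate the thresholding idea, the sketch below zeroes out low-density samples before standard alpha compositing along a ray. The threshold is passed in by hand; the contrast-based estimation described in the abstract is not reproduced, and the function and parameter names are assumptions.

```python
# Minimal sketch of density thresholding for fog removal during volume rendering.
# The threshold is a free parameter here; in the paper it is estimated from the
# scene's global contrast, which is not reproduced in this sketch.
import numpy as np

def composite_ray(sigmas, colors, deltas, sigma_threshold=0.0):
    """sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) sample spacings."""
    sigmas = np.where(sigmas < sigma_threshold, 0.0, sigmas)  # suppress "fog" samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                   # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])  # transmittance
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)            # rendered RGB

# Toy example: a hazy (low-density) region in front of a solid red surface.
sigmas = np.array([0.3, 0.3, 50.0])
colors = np.array([[0.8, 0.8, 0.8], [0.8, 0.8, 0.8], [1.0, 0.0, 0.0]])
deltas = np.full(3, 0.1)
print(composite_ray(sigmas, colors, deltas))                      # foggy render
print(composite_ray(sigmas, colors, deltas, sigma_threshold=1.0)) # fog removed
```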
Abstract: Building on the success of Neural Radiance Fields (NeRFs), recent years have seen significant advances in the domain of novel view synthesis. These models capture the scene's volumetric radiance field, creating highly convincing, dense, photorealistic models through the use of simple, differentiable rendering equations. Despite their popularity, these algorithms suffer from severe ambiguities in the visual data inherent to the RGB sensor: although images generated with view synthesis can appear very believable, the underlying 3D model will often be wrong. This considerably limits the usefulness of these models in practical applications like robotics and Extended Reality (XR), where an accurate, dense 3D reconstruction would otherwise be of significant value. In this technical report, we present the vital differences between view synthesis models and 3D reconstruction models. We also discuss why a depth sensor is essential for modeling accurate geometry in general outward-facing scenes under the current paradigm of novel view synthesis methods. Focusing on the structure-from-motion task, we practically demonstrate this need by extending the Plenoxel radiance field model, presenting an analytical differential approach for dense mapping and tracking with radiance fields based on RGB-D data, without a neural network. Our method achieves state-of-the-art results in both the mapping and tracking tasks while also being faster than competing neural-network-based approaches.
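As a rough illustration of how depth supervision constrains geometry, the sketch below renders color and expected depth from the same compositing weights and penalizes deviation from an RGB-D measurement. This is a generic RGB-D radiance-field loss for intuition only, not the analytical Plenoxel formulation from the report; the names and the depth weight are assumptions.

```python
# Generic RGB-D supervision sketch: render color and expected depth from the same
# compositing weights and compare both against an RGB-D measurement. Not the
# report's analytical Plenoxel derivation; names and lambda_d are illustrative.
import numpy as np

def render_rgb_and_depth(sigmas, colors, z_vals, deltas):
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])
    weights = alphas * trans
    rgb = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * z_vals).sum()   # expected ray termination depth
    return rgb, depth

def rgbd_loss(rgb_pred, depth_pred, rgb_meas, depth_meas, lambda_d=0.5):
    # The photometric term alone leaves geometry ambiguous; the depth term
    # anchors the reconstruction to the sensor measurement.
    return np.sum((rgb_pred - rgb_meas) ** 2) + lambda_d * (depth_pred - depth_meas) ** 2

# Toy ray: two haze samples in front of a green surface at depth 1.0.
sigmas = np.array([0.2, 0.2, 40.0])
colors = np.array([[0.7, 0.7, 0.7], [0.7, 0.7, 0.7], [0.0, 1.0, 0.0]])
z_vals = np.array([0.4, 0.7, 1.0])
deltas = np.full(3, 0.3)
rgb, depth = render_rgb_and_depth(sigmas, colors, z_vals, deltas)
print(rgbd_loss(rgb, depth, rgb_meas=np.array([0.0, 1.0, 0.0]), depth_meas=1.0))
```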