Abstract: Panoramic radiography is a widely used imaging modality in dental practice and research. However, it only provides flattened 2D images, which limits the detailed assessment of dental structures. In this paper, we propose Occudent, a framework for 3D teeth reconstruction from panoramic radiographs using neural implicit functions, which, to the best of our knowledge, is the first work to do so. For a given point in 3D space, the implicit function estimates whether the point is occupied by a tooth, and thus implicitly determines the boundaries of 3D tooth shapes. First, Occudent applies multi-label segmentation to the input panoramic radiograph. Next, tooth shape embeddings and tooth class embeddings are generated from the segmentation outputs and fed to the reconstruction network. A novel module called Conditional eXcitation (CX) is proposed to effectively incorporate the combined shape and class embeddings into the implicit function. The performance of Occudent is evaluated with both quantitative and qualitative measures. Importantly, Occudent is trained and validated on actual panoramic radiographs, unlike recent works that rely on synthesized images. Experiments demonstrate the superiority of Occudent over state-of-the-art methods.
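As a rough illustration of the occupancy formulation described above, the sketch below maps a 3D query point and a conditioning vector (combined shape and class embedding) to an occupancy probability, with Conditional eXcitation approximated as a sigmoid-gated, channel-wise scaling of the hidden features. The layer sizes, the gating form, and the names OccupancyNet and ConditionalExcitation are illustrative assumptions, not the actual Occudent architecture.

# Minimal sketch of an occupancy-style implicit function with a conditional
# excitation step (hypothetical layer sizes and gating; not the published model).
import torch
import torch.nn as nn

class ConditionalExcitation(nn.Module):
    """Scale hidden features channel-wise by gates predicted from the
    combined shape/class embedding (one plausible reading of "CX")."""
    def __init__(self, cond_dim: int, hidden_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(cond_dim, hidden_dim), nn.Sigmoid())

    def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return h * self.gate(cond)

class OccupancyNet(nn.Module):
    def __init__(self, cond_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.fc_in = nn.Linear(3, hidden_dim)    # 3D query point -> hidden features
        self.cx = ConditionalExcitation(cond_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, 1)   # occupancy logit

    def forward(self, xyz: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.fc_in(xyz))
        h = self.cx(h, cond)                     # inject shape + class condition
        return torch.sigmoid(self.fc_out(h))     # probability that the point is inside a tooth

# Usage: query 1024 points, each paired with a conditioning vector.
net = OccupancyNet()
points = torch.rand(1024, 3)
condition = torch.randn(1024, 256)               # in practice, one embedding broadcast per tooth
occupancy = net(points, condition)               # shape (1024, 1)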
Abstract: Panoramic radiography (panoramic X-ray, PX) is a widely used imaging modality for dental examination. However, its applicability is limited compared to 3D cone-beam computed tomography (CBCT), because PX provides only 2D flattened images of the oral structure. In this paper, we propose a new framework that estimates the 3D oral structure from real-world PX images. Since paired PX and CBCT data are scarce, we train on PX images simulated from CBCT but use real-world panoramic radiographs at inference time. We propose a new ray-sampling method for simulating panoramic radiographs, inspired by the imaging principle of panoramic radiography, together with a rendering function derived from the Beer-Lambert law. Our model consists of three parts: a translation module, a generation module, and a refinement module. The translation module translates a real-world panoramic radiograph into the style of the simulated training images. The generation module reconstructs the 3D structure from the input image without any prior information such as a dental arch. Our ray-based generation approach makes it possible to reverse the process of generating PX from the oral structure and thereby reconstruct CBCT data. Lastly, the refinement module enhances the quality of the 3D output. Results show that our approach outperforms other state-of-the-art methods on both simulated and real-world images.
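The rendering function mentioned above can be illustrated with a minimal Beer-Lambert line integral: a simulated PX pixel intensity is obtained by accumulating attenuation along a sampled ray through the CBCT volume, I = I0 * exp(-∫ μ ds). The sketch below uses nearest-neighbour sampling on a toy volume; the ray geometry, step size, and the function name render_ray are illustrative assumptions, since the paper's panoramic ray-sampling scheme is not detailed in the abstract.

# Minimal sketch of Beer-Lambert ray rendering for simulating one PX pixel
# from a CBCT attenuation volume (illustrative geometry only).
import numpy as np

def render_ray(mu_volume, origin, direction, n_samples=256, step=1.0, i0=1.0):
    """Integrate attenuation mu along a ray and apply I = I0 * exp(-sum(mu) * ds)."""
    ts = np.arange(n_samples) * step
    pts = origin[None, :] + ts[:, None] * direction[None, :]        # (n_samples, 3)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(mu_volume.shape) - 1)
    mu = mu_volume[idx[:, 0], idx[:, 1], idx[:, 2]]                  # nearest-neighbour lookup
    return i0 * np.exp(-np.sum(mu * step))                          # Beer-Lambert law

# Usage: cast one ray through a toy 64^3 attenuation volume.
volume = np.random.rand(64, 64, 64) * 0.01
intensity = render_ray(volume,
                       origin=np.array([0.0, 32.0, 32.0]),
                       direction=np.array([1.0, 0.0, 0.0]),
                       n_samples=64)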
Abstract: Unlike a conventional terrestrial network, an unmanned aerial vehicle (UAV) network is required to serve aerial users (AUs) as well as ground users (GUs). To serve both GUs and AUs, we first consider two base station (BS) service schemes: the inclusive-service BS (IS-BS) scheme, in which every BS serves both GUs and AUs simultaneously, and the exclusive-service BS (ES-BS) scheme, in which each BS serves either GUs or AUs exclusively. We also model the BS antenna power gain, which is determined by the BS antenna tilt angle and the horizontal distance between a BS and a user (GU or AU). For each BS service scheme, we derive the network outage probability by taking into account the characteristics of the BS antenna power gain and the channel components for line-of-sight (LoS) and non-line-of-sight (NLoS) environments. Finally, we show the effect of the total BS density, the interfering BS density, and the user density on the optimal BS antenna tilt angle and the network outage probability, and provide an appropriate BS service scheme for different network setups.
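For context, a standard way to define the network outage probability in such stochastic-geometry analyses (assumed here for illustration; the paper's exact expressions are not reproduced in the abstract) is via an SINR threshold \theta:

P_{\mathrm{out}}(\theta) = \Pr\left[\mathrm{SINR} < \theta\right], \qquad
\mathrm{SINR} = \frac{G(\phi, r_0)\, h_0\, \ell(r_0)}{\sigma^2 + \sum_{i \in \Phi_I} G(\phi, r_i)\, h_i\, \ell(r_i)},

where G(\phi, r) is the BS antenna power gain as a function of the tilt angle \phi and the horizontal distance r, h_i is the small-scale fading gain with LoS/NLoS-dependent statistics, \ell(\cdot) is the path-loss function, \sigma^2 is the noise power, and \Phi_I is the set of interfering BSs.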