With the advent of new technologies, Augmented Reality (AR) has become an effective tool for remote collaboration. However, the narrow field of view (FoV) and motion blur of AR headsets can make for an unpleasant viewing experience and limit comprehension for remote viewers. In this article, we propose a two-stage pipeline to tackle this issue and ensure a stable viewing experience with a larger FoV. The solution involves an offline 3D reconstruction of the indoor environment, followed by enhanced rendering that uses only the live poses of the AR device. We experiment with and evaluate two different 3D reconstruction methods, an RGB-D geometric approach and Neural Radiance Fields (NeRF), comparing their data requirements, reconstruction quality, and rendering and training times. The sequences generated by these methods had smoother transitions and provided a better perspective of the environment. The geometry-based enhanced-FoV method produced the best renderings among the attempted approaches, as its outputs were free of blur. Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) metrics quantitatively confirm that the geometry-based enhanced-FoV method yields the highest rendering quality. Link to the code repository: https://github.com/MixedRealityETHZ/ImageStabilization.
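As a concrete illustration of the evaluation metrics, SSIM and PSNR between a rendered frame and its ground-truth counterpart can be computed with off-the-shelf tools. The sketch below uses scikit-image with placeholder file names; it is an assumed setup for illustration, not the evaluation script from the linked repository.

```python
# Minimal sketch: comparing a rendered frame against a ground-truth frame
# with SSIM and PSNR. File names are hypothetical placeholders.
from skimage.io import imread
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rendered = imread("rendered_frame.png")      # hypothetical rendered output
reference = imread("groundtruth_frame.png")  # hypothetical ground-truth frame

# SSIM over an RGB image pair; channel_axis marks the color dimension
# (scikit-image >= 0.19), and data_range is the dynamic range of 8-bit images.
ssim = structural_similarity(reference, rendered, channel_axis=-1, data_range=255)

# PSNR in decibels over the same 8-bit dynamic range.
psnr = peak_signal_noise_ratio(reference, rendered, data_range=255)

print(f"SSIM: {ssim:.4f}, PSNR: {psnr:.2f} dB")
```

Higher values are better for both metrics, so frame-averaged SSIM and PSNR of this kind are a natural way to compare the rendering quality of the two reconstruction methods.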