Abstract: Recently proposed 3D object reconstruction methods represent a mesh with an atlas - a set of planar patches approximating the surface. However, their application in a real-world scenario is limited since the surfaces of reconstructed objects contain discontinuities, which degrade the quality of the final mesh. This is mainly caused by independent processing of individual patches, and in this work, we propose to mitigate this limitation by preserving local consistency around patch vertices. To that end, we introduce a Locally Conditioned Atlas (LoCondA), a framework for representing a 3D object hierarchically in a generative model. Firstly, the model maps a point cloud of an object onto a sphere. Secondly, by leveraging a spherical prior, we enforce the mapping to be locally consistent on the sphere and on the target object. This way, we can sample a mesh quad on that sphere and project it back onto the object's manifold. With LoCondA, we can produce topologically diverse objects while ensuring that the quads can be stitched together. We show that the proposed approach provides structurally coherent reconstructions while producing meshes of quality comparable to that of competing methods.
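The quad-sampling step described above can be illustrated with a minimal sketch: a small quad is sampled on the unit sphere and its four vertices are passed through a shared sphere-to-surface mapping conditioned on a shape code, so that adjacent quads share vertices and remain stitchable. The class and function names, layer sizes, and the `SphereToSurface` mapping below are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of sampling a quad on the sphere and projecting it onto
# the object's surface; interfaces and sizes are assumed, not taken from the paper.
import torch
import torch.nn as nn


class SphereToSurface(nn.Module):
    """Placeholder mapping f(x, z): unit-sphere point + shape code -> surface point."""

    def __init__(self, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, sphere_pts: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        cond = z.expand(sphere_pts.shape[0], -1)          # same shape code for every vertex
        return self.net(torch.cat([sphere_pts, cond], dim=1))


def sample_sphere_quad(center: torch.Tensor, size: float = 0.05) -> torch.Tensor:
    """Return 4 quad vertices on the unit sphere around `center` (itself on the sphere)."""
    n = center / center.norm()
    # Build a local tangent basis at the center point.
    helper = torch.tensor([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else torch.tensor([1.0, 0.0, 0.0])
    u = torch.linalg.cross(n, helper); u = u / u.norm()
    v = torch.linalg.cross(n, u)
    offsets = torch.tensor([[-1., -1.], [1., -1.], [1., 1.], [-1., 1.]]) * size
    quad = n + offsets[:, :1] * u + offsets[:, 1:] * v    # 4 points near the center
    return quad / quad.norm(dim=1, keepdim=True)          # re-project onto the sphere


# Usage: sample one quad on the sphere and map it onto the object's surface.
z = torch.randn(1, 128)                                    # placeholder shape code
center = torch.randn(3); center = center / center.norm()   # random point on the sphere
quad_on_sphere = sample_sphere_quad(center)                # (4, 3)
quad_on_object = SphereToSurface()(quad_on_sphere, z)      # (4, 3) quad on the surface
```

Because all vertices pass through the same conditioned mapping, two quads sampled with a shared sphere vertex map to the same surface point, which is what keeps neighboring patches consistent.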
Abstract: In this work, we propose a novel method for generating 3D point clouds that leverages properties of hypernetworks. Contrary to existing methods that learn only the representation of a 3D object, our approach simultaneously finds a representation of the object and its 3D surface. The main idea of our HyperCloud method is to build a hypernetwork that returns weights of a particular neural network (target network) trained to map points from a uniform unit ball distribution into a 3D shape. As a consequence, a particular 3D shape can be generated by point-by-point sampling from the assumed prior distribution and transforming the sampled points with the target network. Since the hypernetwork is based on an auto-encoder architecture trained to reconstruct realistic 3D shapes, the target network weights can be considered a parametrization of the surface of a 3D shape, rather than the standard point cloud representation usually returned by competing approaches. The proposed architecture allows finding a mesh-based representation of 3D objects in a generative manner while producing point clouds on par in quality with state-of-the-art methods.
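The hypernetwork idea described above can be sketched in a few lines: a shape embedding is mapped to the flat weight vector of a small target MLP, which then transforms points sampled from the uniform unit ball into the 3D shape. The layer sizes, the two-layer target network, and the names `HyperNetwork`, `target_network`, and `sample_unit_ball` are assumptions for illustration, not the authors' exact architecture.

```python
# Hypothetical sketch of a hypernetwork emitting the weights of a target network
# that maps unit-ball samples onto a 3D shape; sizes and names are assumed.
import torch
import torch.nn as nn


def sample_unit_ball(n_points: int) -> torch.Tensor:
    """Draw points uniformly from the 3D unit ball via rejection sampling."""
    pts = []
    while sum(p.shape[0] for p in pts) < n_points:
        cand = torch.rand(2 * n_points, 3) * 2.0 - 1.0          # cube [-1, 1]^3
        cand = cand[cand.norm(dim=1) <= 1.0]                     # keep points inside the ball
        pts.append(cand)
    return torch.cat(pts)[:n_points]


class HyperNetwork(nn.Module):
    """Maps a shape embedding to the flat weight vector of a small target MLP."""

    def __init__(self, embed_dim: int = 128, hidden: int = 64):
        super().__init__()
        # Target MLP: 3 -> hidden -> 3 (weights and biases), generated per shape.
        self.hidden = hidden
        self.n_params = (3 * hidden + hidden) + (hidden * 3 + 3)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, self.n_params),
        )

    def forward(self, embedding: torch.Tensor) -> dict:
        w = self.mlp(embedding)
        h, i = self.hidden, 0
        w1 = w[i:i + 3 * h].view(h, 3); i += 3 * h
        b1 = w[i:i + h];                i += h
        w2 = w[i:i + h * 3].view(3, h); i += h * 3
        b2 = w[i:i + 3]
        return {"w1": w1, "b1": b1, "w2": w2, "b2": b2}


def target_network(points: torch.Tensor, params: dict) -> torch.Tensor:
    """Apply the generated weights: map unit-ball samples onto the 3D shape."""
    hid = torch.relu(points @ params["w1"].t() + params["b1"])
    return hid @ params["w2"].t() + params["b2"]


# Usage: one embedding (e.g. from a point-cloud encoder) -> one 3D shape,
# sampled point by point from the prior and pushed through the target network.
embedding = torch.randn(128)                              # placeholder shape code
params = HyperNetwork()(embedding)                        # weights of the target network
shape = target_network(sample_unit_ball(2048), params)    # (2048, 3) point cloud
```

Because the target network is a continuous map from the unit ball, sampling more points from the prior yields an arbitrarily dense point cloud of the same shape, which is what makes the weights act as a surface parametrization rather than a fixed point set.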