Abstract: Understanding the motion of articulated mechanical assemblies from static geometry remains a core challenge in 3D perception and design automation. Prior work on everyday articulated objects such as doors and laptops typically assumes simplified kinematic structures or relies on joint annotations. In mechanical assemblies such as gear trains, however, motion arises from geometric coupling, through meshing teeth or aligned axes, making it difficult for existing methods to reason about relational motion from geometry alone. To address this gap, we introduce MechBench, a benchmark dataset of 693 diverse synthetic gear assemblies with part-wise ground-truth motion trajectories. MechBench provides a structured setting for studying coupled motion, where part dynamics are induced by contact and transmission rather than by predefined joints. Building on this, we propose DYNAMO, a dependency-aware neural model that predicts per-part SE(3) motion trajectories directly from segmented CAD point clouds. Experiments show that DYNAMO outperforms strong baselines, producing accurate and temporally consistent predictions across varied gear configurations. Together, MechBench and DYNAMO establish a systematic framework for data-driven learning of coupled mechanical motion in CAD assemblies.
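To make the notion of coupled per-part SE(3) trajectories concrete, the following minimal Python sketch shows how two externally meshing spur gears counter-rotate at angular speeds inversely proportional to their tooth counts, with each part's pose expressed as a $4 \times 4$ homogeneous transform per timestep. The gear-ratio rule, the shared z-axis alignment, and all function names are illustrative assumptions, not DYNAMO's implementation.
\begin{verbatim}
# A minimal sketch (not the authors' code) of coupled gear motion expressed
# as per-part SE(3) trajectories. Tooth counts and centers are illustrative.
import numpy as np

def rotation_about_z(theta: float) -> np.ndarray:
    """3x3 rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def se3(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous SE(3) transform."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def coupled_trajectories(driver_angles, teeth_driver, teeth_driven,
                         center_driver, center_driven):
    """Per-timestep (T_driver, T_driven) poses. Two externally meshing
    gears counter-rotate at speeds inversely proportional to tooth count."""
    ratio = teeth_driver / teeth_driven
    return [(se3(rotation_about_z(theta), center_driver),
             se3(rotation_about_z(-theta * ratio), center_driven))
            for theta in driver_angles]

# Example: a 20-tooth driver turns a 40-tooth gear at half speed,
# in the opposite direction.
angles = np.linspace(0.0, np.pi, 5)
traj = coupled_trajectories(angles, 20, 40,
                            np.zeros(3), np.array([1.5, 0.0, 0.0]))
\end{verbatim}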
Abstract: Understanding the evolution of human society, viewed as a complex adaptive system, is a task that has been approached from many angles. In this paper, we tractably simulate an agent-based model with a large population. To do so, we introduce an entity called \textit{society}, which reduces the complexity of each simulation step from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$. We propose a realistic setting in which a joint alternating-maximization algorithm maximizes a \textit{fitness} function that, we argue, captures the way societies develop. Our key contributions include (i) a novel protocol for simulating the evolution of a society with cheap, non-optimal joint alternating-maximization steps, (ii) a framework for carrying out experiments within this joint-optimization simulation setting, (iii) experiments showing that the approach is empirically sensible, and (iv) an alternative justification for the use of \textit{society} in the simulations.
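The abstract does not spell out how the \textit{society} entity achieves the $\mathcal{O}(n^2) \to \mathcal{O}(n)$ reduction; one plausible reading, sketched below in Python, is that each agent interacts with a single aggregate society state rather than with every other agent, and that agent and society updates alternate. The scalar agent traits, the quadratic fitness, and the step size are hypothetical choices for illustration only.
\begin{verbatim}
# A minimal sketch (assumptions, not the paper's protocol) of the
# O(n^2) -> O(n) reduction: agents interact with one aggregate "society"
# state instead of pairwise, and agent/society updates alternate.
import numpy as np

rng = np.random.default_rng(0)
n, steps, lr = 1000, 50, 0.1
agents = rng.normal(size=n)   # one scalar trait per agent (illustrative)

def fitness(agents, society):
    # Hypothetical fitness: agents benefit from proximity to the society
    # state; the paper's actual functional form may differ.
    return -np.mean((agents - society) ** 2)

for _ in range(steps):
    society = agents.mean()                  # O(n): refit society to agents
    agents += lr * 2.0 * (society - agents)  # O(n): ascent step given society

print(f"final fitness: {fitness(agents, agents.mean()):.4f}")
\end{verbatim}
Each iteration touches every agent once against a single aggregate, so the cost is linear in $n$, whereas explicit pairwise interaction would cost $\mathcal{O}(n^2)$ per step.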
Abstract: 3D shape models are naturally parameterized using vertices and faces, \ie, composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on voxelized representations of the object. Lifting convolution operators from 2D to 3D incurs high computational overhead with little additional benefit, as most of the geometric information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent `geometry images' representing the shape surfaces of a category of 3D objects. We then use this consistent representation for category-specific shape-surface generation from a parametric representation or an image, developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces, and reconstruct 3D shape surfaces from previously unseen images.
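As a concrete illustration of the `geometry image' representation, the Python sketch below resamples a surface (here a unit sphere, an illustrative choice) onto a regular 2D grid whose three channels store $(x, y, z)$ coordinates, so that ordinary 2D convolutional and residual networks can consume it like an image; the grid size and spherical parameterization are assumptions, not the authors' construction procedure.
\begin{verbatim}
# A minimal sketch of the 'geometry image' idea: a surface resampled onto
# a regular 2D grid whose three channels hold (x, y, z) coordinates.
import numpy as np

def sphere_geometry_image(height: int = 64, width: int = 64) -> np.ndarray:
    """(height, width, 3) array of surface points from a spherical
    parameterization of the unit sphere (illustrative surface)."""
    u = np.linspace(0.0, np.pi, height)        # polar angle
    v = np.linspace(0.0, 2.0 * np.pi, width)   # azimuth
    uu, vv = np.meshgrid(u, v, indexing="ij")
    return np.stack([np.sin(uu) * np.cos(vv),
                     np.sin(uu) * np.sin(vv),
                     np.cos(uu)], axis=-1)

geom = sphere_geometry_image()                 # shape (64, 64, 3)
# Any 2D CNN can now treat `geom` as a 3-channel image, e.g. a residual
# network mapping a parametric code or photograph to a geometry image.
\end{verbatim}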