Transformation synchronization is the problem of recovering absolute transformations from a given set of pairwise relative motions. Despite its usefulness, the problem remains challenging due to the influence of noisy and outlier relative motions, and the difficulty of modeling them analytically and suppressing them with high fidelity. In this work, we avoid handcrafting robust loss functions and instead propose to use graph neural networks (GNNs) to learn transformation synchronization. Unlike previous works that use complicated multi-stage pipelines, we use an iterative approach in which each step consists of a single weight-shared message passing layer that refines the absolute poses from the previous iteration by predicting an incremental update in the tangent space. To reduce the influence of outliers, the messages are weighted before aggregation. Our iterative approach alleviates the need for an explicit initialization step and performs well with identity initial poses. Although our approach is simple, we show that it performs favorably against existing handcrafted and learned synchronization methods through experiments on both SO(3) and SE(3) synchronization.
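To make the iterative update concrete, the following is a minimal PyTorch sketch of one weight-shared message passing step for SO(3) synchronization. It is an illustration under assumptions, not the paper's architecture: the networks `msg_net` and `weight_net`, the flattened-residual messages, and the measurement convention R_ij ≈ R_i R_j⁻¹ are all hypothetical choices made for the sketch.

```python
import torch

def so3_exp(w):
    """Map so(3) tangent vectors (..., 3) to rotation matrices (..., 3, 3)."""
    theta = w.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    k = w / theta                                        # unit rotation axis
    K = torch.zeros(*w.shape[:-1], 3, 3)                 # hat(k), skew-symmetric
    K[..., 0, 1], K[..., 0, 2] = -k[..., 2], k[..., 1]
    K[..., 1, 0], K[..., 1, 2] = k[..., 2], -k[..., 0]
    K[..., 2, 0], K[..., 2, 1] = -k[..., 1], k[..., 0]
    t = theta.unsqueeze(-1)
    I = torch.eye(3).expand_as(K)
    return I + torch.sin(t) * K + (1.0 - torch.cos(t)) * (K @ K)  # Rodrigues

def sync_step(R, edges, R_rel, msg_net, weight_net):
    """One weight-shared refinement step: R_i <- exp(dw_i) R_i."""
    i, j = edges[:, 0], edges[:, 1]
    # Residual of each measurement w.r.t. the current absolute estimates,
    # assuming the convention R_ij ~= R_i R_j^{-1}.
    resid = R_rel @ R[j] @ R[i].transpose(-1, -2)        # (E, 3, 3)
    msg = resid.reshape(-1, 9)                           # per-edge message
    w = torch.sigmoid(weight_net(msg))                   # (E, 1) soft inlier weights
    # Weighted aggregation of messages at each receiving node i.
    agg = torch.zeros(R.shape[0], 9)
    agg.index_add_(0, i, w * msg)
    norm = torch.zeros(R.shape[0], 1).index_add_(0, i, w).clamp(min=1e-8)
    dw = msg_net(agg / norm)                             # (N, 3) tangent update
    return so3_exp(dw) @ R                               # refined absolute poses

# Usage: identity initialization, iterating the same weight-shared layer.
N, E = 5, 8
R = torch.eye(3).repeat(N, 1, 1)                         # identity initial poses
edges = torch.randint(0, N, (E, 2))                      # hypothetical edge list
R_rel = so3_exp(torch.randn(E, 3))                       # stand-in measurements
msg_net = torch.nn.Linear(9, 3)                          # stand-in networks
weight_net = torch.nn.Linear(9, 1)
for _ in range(4):
    R = sync_step(R, edges, R_rel, msg_net, weight_net)
```

The sketch reflects two properties the abstract highlights: the per-edge weights let the aggregation softly downweight outlier measurements rather than rejecting them outright, and sharing one layer across iterations keeps the model compact while permitting an arbitrary number of refinement steps from an identity initialization.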