I investigate the capability of small transformers to compute the greatest common divisor (GCD) of two positive integers. When the training distribution and the representation base are carefully chosen, models achieve 98% accuracy and correctly predict 91 of the first 100 GCDs. Model predictions are deterministic and fully interpretable. During training, the models learn to cluster input pairs with the same GCD, and to classify them by their divisors. Basic models, trained on uniform operands encoded in small bases, only compute a handful of GCDs (up to 38 of the first 100): the products of divisors of the base. Longer training and larger bases allow some models to "grok" small prime GCDs. Training on log-uniform operands boosts performance to 73 correct GCDs, and balancing the training distribution of GCDs, from inverse square to log-uniform, further improves it to 91 GCDs. Training models on a uniform distribution of GCDs breaks deterministic model behavior.
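
To make the data-generation choices concrete, the following is a minimal illustrative sketch (not the paper's code; all function names and the operand range are my own assumptions). It enumerates the GCDs that are "products of divisors of the base", and contrasts uniform operands (whose GCDs follow an inverse-square law) with log-uniform operands and with pairs constructed to have a prescribed, e.g. log-uniformly sampled, GCD.

```python
import math
import random

def has_only_base_primes(k, base):
    """True if every prime factor of k also divides the base,
    i.e. k is a "product of divisors of the base"."""
    for p in range(2, k + 1):
        while k % p == 0:
            if base % p != 0:
                return False
            k //= p
    return True

def predictable_gcds(base, limit=100):
    """GCDs <= limit that a basic model trained in this base can represent;
    the count grows with the number of prime divisors of the base."""
    return [k for k in range(1, limit + 1) if has_only_base_primes(k, base)]

def sample_uniform_pair(max_val=10**6):
    """Uniform operands: GCD outcomes follow an inverse-square law,
    P(gcd = k) proportional to 1 / k^2."""
    return random.randint(1, max_val), random.randint(1, max_val)

def sample_log_uniform_pair(max_val=10**6):
    """Log-uniform operands: small operands, and hence larger GCDs,
    are sampled far more often than under the uniform law."""
    a = round(math.exp(random.uniform(0.0, math.log(max_val))))
    b = round(math.exp(random.uniform(0.0, math.log(max_val))))
    return max(a, 1), max(b, 1)

def sample_pair_with_gcd(k, max_val=10**6):
    """Sample a pair whose GCD is exactly k (reject until the cofactors are
    coprime); drawing k from a chosen law, e.g. log-uniform, balances the
    training distribution of GCDs."""
    while True:
        a = random.randint(1, max_val // k)
        b = random.randint(1, max_val // k)
        if math.gcd(a, b) == 1:
            return k * a, k * b

if __name__ == "__main__":
    # Base 10: only GCDs of the form 2^a * 5^b are "products of divisors of the base".
    print(predictable_gcds(10))   # [1, 2, 4, 5, 8, 10, 16, 20, 25, ...]
    print(len(predictable_gcds(30)))  # base 30 = 2*3*5 covers more of the first 100 GCDs
```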