With the recent adoption of laws supporting the ``right to be forgotten'' and the widespread use of Graph Neural Networks for modeling graph-structured data, graph unlearning has emerged as a crucial research area. Existing studies focus on efficiently updating model parameters but often overlook the time-consuming re-computation of graph propagation required after each removal, which significantly limits their scalability on large graphs. In this paper, we present ScaleGUN, the first certifiable graph unlearning mechanism that scales to billion-edge graphs. ScaleGUN employs a lazy local propagation method to enable efficient updates of the embedding matrix during data removal. We prove that this lazy local propagation guarantees certified unlearning under all three graph unlearning scenarios: node feature, edge, and node unlearning. Extensive experiments on real-world datasets demonstrate the efficiency and efficacy of ScaleGUN. Remarkably, ScaleGUN achieves $(\epsilon,\delta)=(1,10^{-4})$ certified unlearning on the billion-edge graph ogbn-papers100M in 20 seconds for a $5K$-random-edge removal request (of which only 5 seconds are spent updating the embedding matrix), compared to 1.91 hours for retraining and 1.89 hours for re-propagation. Our code is available online.