Diffusion models have shown remarkable success in computer vision, but it remains unclear whether ODE-based probability-flow models or SDE-based diffusion models are superior, and under what circumstances. Comparing the two is challenging because of dependencies on the data distribution, score training, and other numerical factors. In this paper, we study the problem mathematically by examining two limiting scenarios: the ODE case and the large-diffusion case. We first introduce a pulse-shaped error that perturbs the score function and analyze how this error accumulates during generation, then generalize the analysis to arbitrary errors. Our findings indicate that when the perturbation occurs near the end of the generative process, the ODE model outperforms the SDE model with a large diffusion coefficient; when the perturbation occurs earlier, however, the SDE model outperforms the ODE model, and we show that the sampling error caused by the pulse-shaped perturbation can be suppressed exponentially as the magnitude of the diffusion term grows to infinity. We validate this phenomenon numerically on toy models, including a Gaussian, Gaussian mixtures, and the Swiss roll. Finally, experiments on MNIST show that varying the diffusion coefficient can improve sample quality even when the score function is not well trained.
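To make the setting concrete, the following is a minimal, hedged sketch (not the paper's code) of the kind of toy experiment described above: a 1D Gaussian under a VP forward SDE with a known score, a pulse-shaped perturbation applied to the score over a short time window, and reverse-time sampling along a family of processes parameterized by an assumed diffusion-strength parameter `eta` (`eta = 0` recovers the probability-flow ODE, `eta = 1` the standard reverse SDE, and larger `eta` a large-diffusion regime). All variable names and the `eta`-parameterization are illustrative assumptions, not quantities defined in the paper.

```python
import numpy as np

# Toy sketch (assumed setup): 1D Gaussian data under the VP forward SDE
#   dx = -0.5*beta*x dt + sqrt(beta) dW,
# so the exact score is s(x, t) = -x / var(t) with
#   var(t) = sigma0**2 * exp(-beta*t) + 1 - exp(-beta*t).
beta, sigma0, T, n_steps, n_samples = 1.0, 0.5, 5.0, 1000, 20000
rng = np.random.default_rng(0)
dt = T / n_steps

def var(t):
    return sigma0**2 * np.exp(-beta * t) + 1.0 - np.exp(-beta * t)

def score(x, t, pulse_window=None, pulse_amp=0.0):
    s = -x / var(t)
    # Pulse-shaped perturbation: a constant bias added only while t lies
    # inside a short window (mimicking a localized score-training error).
    if pulse_window is not None and pulse_window[0] <= t <= pulse_window[1]:
        s = s + pulse_amp
    return s

def generate(eta, pulse_window=None, pulse_amp=0.0):
    """Reverse-time Euler-Maruyama sampler for the assumed family
       dx = [-0.5*beta*x - 0.5*(1+eta)*beta*s] dt + sqrt(eta*beta) dW,
    integrated backward from t = T to t = 0.
    eta = 0 -> probability-flow ODE; eta = 1 -> standard reverse SDE;
    eta > 1 -> large-diffusion regime (illustrative parameterization)."""
    x = rng.normal(0.0, np.sqrt(var(T)), size=n_samples)
    for k in range(n_steps):
        t = T - k * dt
        s = score(x, t, pulse_window, pulse_amp)
        drift = -0.5 * beta * x - 0.5 * (1.0 + eta) * beta * s
        x = x - drift * dt + np.sqrt(eta * beta * dt) * rng.normal(size=n_samples)
    return x

# Perturb the score early in the generative process (near t = T) and compare
# how far the generated standard deviation drifts from the target sigma0.
window = (0.8 * T, 0.9 * T)
for eta in (0.0, 1.0, 4.0):
    x = generate(eta, pulse_window=window, pulse_amp=2.0)
    print(f"eta={eta}: sample std {x.std():.3f} vs target {sigma0:.3f}")
```

Moving the window toward t = 0 probes the other regime discussed in the abstract, where the perturbation hits the end of the generative process; the `eta`-family is used here only because it interpolates between the ODE and increasingly diffusive SDE samplers while preserving the marginals when the score is exact.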