In practice, deploying diffusion models often requires substantial investment in training. As diffusion models are applied in increasingly diverse settings, concerns about potential misuse underscore the need for robust intellectual property protection. Existing protection strategies either employ backdoor-based methods, which integrate a watermark task as an auxiliary training objective alongside the main model task, or embed watermarks directly into the final output samples. However, the former approach is fragile against existing backdoor defense techniques, while the latter fundamentally alters the expected output. In this work, we introduce a novel watermarking framework that embeds the watermark into the whole diffusion process, and we theoretically guarantee that the final output samples contain no additional information. Furthermore, we use statistical algorithms to verify the watermark from internally generated model samples without requiring triggers as conditioning inputs. Detailed theoretical analysis and experimental validation demonstrate the effectiveness of the proposed method.